[Binary artifact: POSIX tar (ustar) archive, owner core:core, containing:
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz — gzip-compressed kubelet log; binary payload not recoverable as text]
*U&+c]ԯ'7#rZ?yS=xy=~jY9@ω2^\sKbnE?]O*ԔW?^`5TcOo骩 ȭn7[`DJF ֡D}v-S5J^gli `J#ﺹpH]sJ?]~u.].t*u:>g{PH ?JQG֥5Mh-V`.DŒs+`S?41HX$?}bPߘI(!u!ՆFƺqK`ׯ{:|?瘨WxuK8 2 .!&5@90 >~ Vݻ&YT歎z:;+uCn~PbޯQCpNHcvCwZْ:Ub,0J­j4L}}5 ?o&2K@V;O|參@uuDY* A gs L`#`"$>ŠSnyJ[p;2edVgģTIF-՘P>(O B1s /Z:ĞaH{U[!=jm)j'J(IF{h!=dr}\==dCF{h!=d2C<!=d2CF{h!=d2CF{h!=d2CF{h2Cjd2Cz!=d$Ly1AR=A%J!Agغuj*q/'`Uqq1jOԌ]ܲ~??=/jVZ0_֕v,JrK/-ARqK)Ioi\o^\usÞR282`̕ߓKT[j~׬?h8U[uA#x"G?U-bC_Z zTN\JzZ;֞oiW veZ_)0رs1Jv2dAJv[ f;zH*.L6-7e'YO~=N:~]9mZW`^ޘoZbzmz%vҎ%5l,O|E׽ߺ<wYM7(Qy3p3nFH 7#f$ 7p3EFHyg$܌p3nFH 7#f$܌p3nFH 7#f.#f$܌p3nFVbSNo5v&gHZ$]f]pVYN^y,UW|co#LE*"SnF'w_R@ Fnmi~ệQ<`PF0(eUEOHՊϪCpy݋s=}TAEA:r6J³M:}u5n8'y>-l~9Va*z^LkwV~QEYUE?`PŰz ߙ\Z>W?PayA59+`77`I;=`_z̄ΐ=YVfv%B(`e3Km*SVi23Ń /ںOG;nG|ڇo!J $35G4‡qLRT##ЪY5*Cz )`g-sQ *cm٬k 9(j#cK)TY(x,R&8v% 5n]<1HBiGk.0Ű R}B"L^62i(@RGTRJmɌer$)Dc`ƒ4#uNٸ'9=-!I/)LAo*R\|2,X_=_x tK''8|gEmJ w׽MK_~{:-zmw mzjIQC7jj?_Kzжa5GҴTLZYOˏ>9 K*Y0R`D$+hP8DT 1QCq\uM`,akQ`7D#3(A;A&ܺ/"sἨ>\fձGc/~S \R}&K˭.#:F *:_Wzv|&&Ae r B($<3c(TaJ/_/Ϥ#TMm\, t@YhuAK0˵' i*z}~قKnӢ$JђdKk,)QV8d8(>RdrGGlhY 81QZ+Pq.54pqA`vxlA&rAV/G)8]G Em בaY.#aREbD hlr5r]B'tKJ~C:r ;sx}iْ3|rR#WS>G)—;[N~$UsJGqbTo*GaA(9ATUab},|QǽYU= :=qtz׌g/Zo%5TbB!Yθ!GĉvTDHrVx劄>`GcҪȏTϋ'Ty%[dm]ߦV>H&ӖM|{8@#E|BZJ)roU8UBXulɂ. М4]BJ |* ( [@q1BXJً1FbԒH :knFU:[PI<[O˛!dh'$dXJZ2^\Mnz5? w^dɫ>8+ǣv9q ^slcɲhkOhi YjNvR6A+'S!brf&i/ Զp![R*IR(zrHڦ&|NT6'FzD|Q8ۑ icP4BIVlW3^SxdvONI7y֠_ϦW9b `A9xh`Kl@Rz(iٙC D%ɢ^ {> !JS{IѨ`T%[8rE(3%fRVlG0H+!桠v38jFh"LE+0e-Ph1Xê@xtܨZXmjQH=CTEG-cMȂ"{R"WFwM97fl>_ѫx(L?1"DI1qʮpFDd EvDqFXB咀[e]SDtr(V]1e,EřB$5he9JoQZw1"6gzo38uHkʹ䱸pqō+Z ЙrNFRIIXpCDkKf)jQDbpfv @mV}MoW~ܺxGJ?KwH?Mveyy'"/8mBMȽHGlzmEe4ȟ<]ZyF'O YI®x9LV">i)SSQZJc3FO%FR=N֤>.yKaEVبykJ.YHYvۄa_J:i2>*%$Jq`lŪF !L8!,m*(oD1^m&[G _J^{z z;Swz^j=x=XE[38g*lI]n |J6JDBGcERRuʼnbL@hf(qY`g/(bIjuoyϓ#₇l h!=I Vg!%eljgVHQ:"g:cA;AlQM PΖ?Κvdfo6)*_ d8JJfeR@T!.H3Β!"e+/T ZE@Nbx_3$JIK! (1]@ W·8ineZGd"P6QdJlQ znl&1WחL' V(].Ӵ2]ϔcZLDy6V&li&"pEeZ_j"4>l|gPwbcԀٹ3sc;2Q0ԕ}(]`mI G, ̣ؕy:cuB<:uz=kI7yFG]9w$jn"!aHH 4lq+Y&s2k>SdVo?\SU'{Ca>|k'Ӟ`jQ"k׵HZ$jy T={]J^6IШsָltP(yC!'й|)~:W=߲ܾ7:ё5/,:S/giO|StM%x2jt\[Rط9ЬԮ\iK.Y1,:%!ujE"v.>ӥ|S to 1xF8BB".z,2:x6\vx P)X$,|00~G>)S~9?8itK}T:8ܟ(cݐ@Ѿ^o`欣K;'_taܤ3e1"5t%٩dJGԂpA̙@GttDVw, Fl΁ 9UTҠc 5I%m&NBqSufD%)*a.*RE&0ȞdFJ@x8;p ]NƿP>dD4BsLz;k`(ey_>YnNl(T1 l0Y9ɗRN!!P?Pļ3DZ6;=y'^51ѣeu4ҳQw݂>;WeMlcٙqY~Ȓ#ճy;|F c';Z#b  T}ڸ (tÎdHN^URZT oy"+L })5%,̡Q@%"XiE0d|-rه-)g"y82Z$>'VPzZI!.gf`șt:Hu$T(^epQF"(ߥsX{L"I0!JMB[MF! 
Ie)E PuB׺M&$YIH h5ge.`)d)uRZ9Ŧ ػ [C ZU|ѵO bɔ-&|ECnET#S {/'6aIh\Ƥ zmTK7g ʪR5͎Z5ؔ {rƀ&9!|ֶE/n^m Ar`@}96mܧ|%CSژCѝ&8I 30@$C"ɡ&ݪ 0_(/B} r'}41H1$+R5!Q= #qz}4,{ʙ/(t.}u6w]{}G,xv_euǭ篭Yp^Ltzf{w-;n>ٽM{eW8W;r>\-nޖϼęy;~Ii]OJA|vKf@t)B a.bGONgAɳyR93{k-ms|f|;,8%Ow=6n\Wo:fί|,nxQ}̼t|W͜뉯O4* {#n{w{<6nד[Q/'˓<>yִg$kޜxJdT5ŐL$wt ;"2Y=՚:5Eyp,ZST0K`D:MR&"lBbsNũ 5Ś5]%$=UBB]D B]HW^Yy㴀,i [}MqˇJW?_.ԕS~t:ե,h_P͏U_8QYU'f7d5G^фU\#%Ғ8.^1@pv1柣*wpt <<{B  #ЧZ;Eճd|5kx4y>%թ> "mjG,{ꅺ0]]Y;p*}0]*6GWUに*VWUJJ 1,0+UXJ+޻RJ5WjE[I5U!NA]4^6A_,<NxXV|X4ۇ}Yb!+i?tv9/OvyʞQgeFt, f6xr(&TN!d_^C@зJW`"e<˜] O۳fypHt bQb|+p %A!HCdL9@P MIh,ktDXVMXupKV2' { K=YO̕% ?cuKMqg黳K^'f<񥞯嫉J9ցƥy~LbۺL1no;{^{C|s;\(;.߉>ӜTd]"*YJEnA* Y6^]p VSATt$gˢy뾜h tBgE{'&OZS=(QzUH慢FkO}瞌z) 584\;I@x١55_ܙӜnz-H -G41% $LjU;ᐝ +fŗ3Z UNQ(Zh`9rmqXb4rS޵7q#26Px?X*u+vj.R)1Hl+.k )DL956@h4~'!lGOZzc^ `hy@Y87 8g؀9$φ2jhJΈc`/öX:²CF3 Ɠ"4*x:z"&Id q5g3̕&3$d %4-a*P^??9 ;aV $$S2`-`c!UQX0Y3ƄQ )TD2'TNxJ&iH&HP]c\aDzn"Y킔P"9F:XoC{$)%*E#P?]no<)*L1#)C6R 5)$iΩH*͍чD*l0Dn 'E $Hwm5nvj -`P8W/]-DS%wb򈜇qTs4IC|`)'Ja40.N>Qg.ɛaK DVx-l&kX + $"AߓΥS2Qh J:M~+a YiC(XKCV+X]믡C:@~(eԗ8?>:BT=|WED NeT ·ZnRPrӠ7n;MWV\g7T"O6Uxr3ŸͫŋEd8[e悙KӋEٷT팣Dո880ivC9C&jkb|uMuՐj@oV3+,oa": z0bZG7uv.;5SJZ' |3I S吪攞f\}~?uFeƦ*uL"($OAs)JHqnF`XdXE$Rjٗ*NGPFd [V(ƜbκE:Bmy5 =ŷvݺc1s"h&rw$̈́|o#MLmC[IxWm&2Y\b].|VbRy }mP&qx=!f$ *VlIWܻe̓{%BS⣳4jhqLRT##Ш5*Cz vj;^s%X)*۰;/(=jLtMIR1{P!CGX}1$vm:z$HQLwf9AO ҃J<^&B)@RGS؂hY'ap^5;G[鷶nʺOyz fwf5e\僙/0:r:ކOk>]^6ri+{e߆N. |w R7{{_؉n'DtnU6oFˋ7έ|͸qښYh `i=I˝YZsriqۀ"gvP9  OAs0nuC N"*F89&Pou05\C"R8Q7pP-A4%rs|)|ޣyodmVaRʵbD`APpIQDLpnb.m k䭬ou\pրy8hZJb7+q7LBCZrd~f7-tϑ[!nc^~h-O2ǣ7\4Ԩt P&+OgHaF#ThҤ%QxgΝ6EiLevz t8Oʭp7 byx3JK#1cqD@5BSA4X]BoF؉FH\>)|P ߋbJRLY*8EZy,56Ep`R-C`%'ֈ#g' @Hwjq)3 Xb344 Z;z2?#xW<5V< `8GP*X)&2Ԙ)GDBŝ9i`E>(fO8V4Ϸyj{/V&(# Ӌ ep0\L'44Jvv;s{ݾ`Qf`˂'- XO?`*) UW&JxTx `Z8C \ _kW>yӇlqZ=K~k;~f]~w(k+t[8)Pex^ /KG*+ZA%;+:Ԓ)ױv!i˦OG~*)NJo-8j{&(7JK݋kRjIho $L<&~5ͭz<[!ainyWjt y߫YW0󔚿cF[1 q[QQ硸Sb-4쮥 ;+n۴*c` m7AD頪 So-ĠlV5*D C5k*xd wf3D~[wBrFڇIPD"fk^)">c1:4s:ڳO~6@wx;E2ᥲ)O+F$FQ0ĨBlRQ˹#1f`9C&zA($.{12UM&R5{$m.{]lj%^Xba/xAI,QRg$4{[paqd^\V'1y|Y9<ܢUHg^ ]1k7z5 c7e `܏2<i cgQ8Tt1ϊ%{)/vBDΔ!ISv$mt_0ghf93X?p2}aUw0?[у׶y>ޘ((Z8̸ QvxlA}r\ s𲍣"6WӑaY*#aREbaCc~ذ&mxuf;HmR^#>W{|ot\ (~ܒAA9AL19{M=UvbOoc;€ފ@AbB!Yθ!GĉvTDHrACszr#5U&n~?'b]:ݖH&r'rWn|inqutrޓQV'H%9c+@X鲿׌'bNGV #`y:Bز /s&9rYbaL:*L0 I=LG eZ30{-#chnoVgclVe7Rևڏ~7cpcS{4b,R\pMP-3 iPB|xf! b3 jxT>s>`.0 DʩRS[)Ž'jc>QKȚ},}И< zԩ?Qӫ%yRqsb4{p)JZh8$H5wqH iYd, 0$]n.CWKf$'+hdk e4aU녌 6AS)ʚ#brf&i 6AՐ-)P!@( =9R$mSaK6X+mN}ajy* 3qƾX(c!XTֵ/of%mq[><^_̷?ŀ_\L^8b,d- 6$(X)΢j2)R1Tc')TR %E`%[ضDљNDK_u"q˴abqPP[=jXQlA" 2^벖 X^)xP ljhc} Bf訲eIYD#Țd|`BlTMa3rި_Uר8Dl"jjDx̐DeUeFd EVdqFZa*leK<*"#Lm5*pXP2gB kL1%-*ocMmy;˼Ձqq8$_g3.uc\G\|У%Z!t\ԠIX-΂d})j!dA.8g+  b7(^q"v!n~|Gя, lE pEV] tBFtQy)"O9`(o/ZǢ6J~<%u@',bykJ.X4Z[{^GNRT6hZdbbҠ)1dNJ:4*`bkgh3rvtb P!S_6r^LJyLy\h捯ڙeMVEViZ[{JyyDP\O[쐤Wd2zBm1*Rꝡ|p:(!%o< 5E(ɱ4Y JqlВ}#8٤d M̋╕JUf |BYXĝ$m'QM(PΖ~PΚr!?8v.xQk/ K@QQ5Jd khҌd`FlBň8@KxR ;3D}X@TB0R~Q.!C0)AW֯oUfVNZ ȲPZSNDց#(m" Ճ"TB1 m6SsFhjhӌ'|m(\׆?rL芵|HU(h"ͧ10QV§l/fA7z09JXdOEQ %𞱅#lۑaz wٰ" yx26lft@c逯 z3kۃqxHY>Hҭ("QPV cXOV>Yﺥ ވ]]).Y 1,;T"uj"Iz2Z-x)_>{~,e+"(0$re ٢b`J*ELrje=с>Дm\.hlQ:TϤRIW/[4m_L7kLY(-:0ұ;JKB-k"fA9hdI<+eRl75$JA%cPQDiQE}Q:FS#=IeԻx=ٱ,]ͦ{CJjv~Iħ-+<=a ݙט:s}Jx˳|rSfb]&!G>z9;M7ꍥӅvҮzg7P?eŐ}|> ]kmp^Z]>{lXN'?,>N.اnڨ"Cs}3/o0?OP}+ԶUuׯ&̍ӛٚT֫3.&KOҴث aϦy-'|F)cw6uTUR tfO&`Lw ɡMlQ^zc[4!'Ka#/dAel* x(.yUVOIQt.)g"! 
hH${[Cyq586v@p/8xl?D#EE%MUÈVY܃V"Emt[Y&DIjk R $O,J,u-eB&ԺY3rv~2 I@&jtȢ Y KC S]k&T!LyDRXop(Э t801 -T|9@>$*GI{/'6u#aQ\?\QƆ$cJokjl4[U@!KSЬe[MNdOb!,Z}-k_ۋǒi[Π3@SwmwKt+|ژ΋;MpJ3Ƹƽb/ !+ FB>]S'}41(֩CJj%qlݺ}tu7%9kx}d)j ii|{Uzx:;b2_,UǻwJCǣ;ݯIc'\}ڇ'3~|e3o|jVWn׼ x.'demhk\-5tWiyLww̗<y,(zm͟6Zy!~olrpђQQ1 C !crS8l#] ^O쩌t K`d8MJ%Tmq|R✊ 5ŚU\% =UB]R]H+lqZ*"q#}C2U|JXG&7.?j+Z6%jn_ZMq>-u-j]fQ{aSWk2#ʄaQɄdTjz&Lr̄*3a2t4pUt,pUfpU$q+pyzf_d'q"VBMMo&bhys脞y*Sj+_ u\_ =O2k7lۄ*73q?m2 .}]<[fߓFڧ] |zW a^}-#KDy 9\z%+ׄz&o&wWW2\aK_^(53߽٩9A|i;ARCL|*Dqz;wY=ttYQӇ(T h,Jc*N ݒh%Zr \U><bjI z*G?WAcҮ`Tp4pՓ{4*Z*fpk}ۺCZ6\ټ%_ >YDgon;ֳ:H/c13s좈CAU{Wo"׫y v',Yo<2ͽgl/Nud#͉겋* ( $@AY<9E$EQ>{d{ej%%tr71hɧO/4_ӫrʣMKO 櫈o>wx0:swן-a=-?e uؿ<Ï?x<%mYwW2w] ;:6ugI6Kօmi:["n N.w /V:&Mt }6hkF\&d\”YKE%B'h6ɪv[q/r \?_qt;OWjl!^\Jcl ?f#>(?33T=#rUdrQcR+ZTzt}5휳Of;ӟ_]q _+k_J/6#J!Uj}i .Fb f[( b=y YKQ9(Z[s,|KQ,25V"iFm>F!Ul]҃ ̙W;|㜟p?Khf=ji|!8ZjdYa6ꚴR}|Qܰvx_emjQid2B [t3 urjTu>c!` dG("qȎ%+t44jM@FBH Z]6|i}l,Eh3[%|*3Նi\][s+,'BPkw+:Y&T ͘RRuocHb"MA؞*_?| ez¬}\uUbʪTDi]T3NrկQ֩Xt`]\nuNo5rnq :(zoFhnȺn PTda;^VTIcg2&RDrhGC-=z`m ;ug2ʰyyd TLeGhWdlPfȱMЋU]XSv&]ߤ|(ҝ(=~([39ah4:!Uaהt3IwŬ4pLH1vE.!2} /u0c歯z.Y}W_n]%TS#!(D$t'TRV\4V`V9Tfoj1XMyPk0jXFo3))gT)^kVn] Dsn)_6 SR.ʈ:m48e$jMO5]`ё^Lꢳyd`Pi2iE$J-jIKMT`ײʠ%T-PNb3qLc!uw:H|^OFni]#t)wf{2("[EaLVVŢI INJE U-'ʔcbЩnɃ)loXy2ɛ0޵`ghU$,85Nr>?/ȯ}5־/ nb!ۮ-dY*gL^ f6(~T"!uh1 ˝,R?,q.cD2f]fփZкdpJI) P El%6چ$Q|DNZrڇlY|sؒjkdF(KhM$[s@:Ū_gL {cF@&!h*S0±JMٓU 4{'xhrUȦ2(slH)ҎߚVYȖFj4ISؘp"N'LM> HSG9UdvQZֆh) PLMBr2Xe0/1CB] uklg,a`XBpeDkF,|Iԥ%Gj ^GU?1ջ4ؐtL)B06 !!DdTJ6ED\ZŨcCұ%JyWkv>]ln޳4n~@#R\&ץt tHT/]:m(Fo!kiz#H2TG[% 0 2`M,R4kvMQ1ZTҠ$_c=pTv0(?=8evOכU%w^Vz-B%Bw]{}X Jxwv_euǭ篭u<[f!7|x%ft~@7Z?/wޮ9P[|OWkGSc^Moj-nyJWW]y>Knrmep+|uM⫁f(͗n\?ųMm|aEᡘ2-=H` aJe-+ajnA&DdV(~$Rb7;ns}o>;]Ϟo,Wǥj"]izZvgc=W|/yavu:a}>qtse_-c;YϜ_,Vk=/|?&kq &aom?&6Mx6N/W.Kdc볿LF6Jk^M!;闒:\F`O1Rɡ⎵ɱbM@#cpc)a)8jkOQbO1he#xJeVU(ibJ:uNl,p3 &4h1I>T&jR-6F VEZOqCO<2WN<$q~:׭hߵP ͧ.FU}U~U}s/$1ykҵ?_,;_{[kۂB "H"ajXE4F*}rR\ =n^'H71hOu2%! / pEZk`z)y1V+`Ɵ$6E+lwox>.η'7>7ŦW~g/:t)R@GQ >d΅Pc*V EQcDGA:WN 0Dt*y ^cT(1<}RxK8b)NeSQ5CzzJZ$]%*P9MR !LbCB9 ϜrL>>jvwwo=vpj%@/.w) hrŀ^!FY"ɔOfK:00~d6vٟjf]dIe byh'YzUw/&7ܓ^H&n<5p1)]Rd6r֑k*ۅ|Q:'XɄ2sys1.琓͆5=Ov"6$;&œJ&oVL"t6mW߆n6-u=>j~y_6o0[;oqlY=uN7t`u5Xw8}uFNGwn۰o7Q!U_X>,K4NQ<),_y{dR[F_,>"=g}uGZ;#Ilq(޳׈쇜 E2d[1߯z8HCQPK4kwz,k*nQ7c4Ny0T[?9./a0>(/Y}ZQo<bWw~y`gt 4W? Uޙ)<>%iR5|񱢒~@%5X`4.R4tzMtx;)"u42ӟ@b!Z)7[l/c|(럫@'|y 22M wejs><<-FlM`eqDPp$S4 Z b 8YgƯǼuIOI ꞹZbcŽN.9#?цW#{m637=9c`NQ|uVP$LY)) D$x.3R: !'Ƃ‘/8;-ľhZzi=N*#^*,"E NsGc0F#gDyTjGI9JFnƂO?t> ˁ,Uv송oreIk!ȣeInr3PJ2[\9M4,x饶%TRJF03F!ķ9͚3W$>b4۹Uw *fLV;YS?,z[e =i)Fg#;ԮoŖ.,|GV#d=fj?~OcI)WŇB !>̓%f $̸ʕ[&Fn=E?+tB@dy0G c2R>Vc 9bxmj|B|?tY?T`gE-6`M@ZX.,]XN`HW0!]w\(VٱXf'e_>WpU M%HY}9k>7ӹ.1M$!ʨ<Zo&B٫pVF7~ G;խIf?+wݯ'o./N畳%(sۏdn9Gl/5&/BM#)8G:o4rqчQI^t:tcvϻ+^N8*AGdӨMJLRi]7ihNQW}Swe4r_'5O*]6}_W x|R;.\EUz%L9 6?#^r R5Vx :%uN&uDžOIǯz:~|scLᏇ޼o.q@{&ׇYhƛM[ch[ƸR T(1םJ(!6SZˇ&1zkF~7]%?+à z)Al7G1ꈲTboAJ('!x`#`"$>ŠSnyJ[p*jSm/b9igo5]p+[ڕ\XjTBJ)`)$$, 1ۤFb ~y%BSFIhFyD\+o0AHQ{#V`'CzR1F: lp64 wE%KRUe=5r(pC OTw+R&rjiw˩!Q#<uߨ vUiǬ=vDzkڑ>L1l4]TpIT #*)`%3Z2+Szr4#4{G[/Hommݞ(9nMAo"9|04\Yo=_x |K'g8|3EmJ woA~6Ozmw"kYQ^o2̹5_sɺݡm 6kZOiTak¢,4ۂ"gvP% S ߂>+[ n(CDhC5P0g kuyCD w12q4حlO-)LgΡc{OY楔kň8f%;FFM@;(\2 keaZ{F\Tp&] ] ZyD7\gM]8MDLF$ _|Z)[{S& /_&/_2JoeGan}t9 #GozlکaT9L Q ŠϪokeϼḶ{=TnW op6*-кIv7LlZ^Jxv] ~NnQ헕㮘tީ]t 5}Z ǩ? 
P{[LIbzAf~\J"ɐZOmjB2 Sa0L)0e Ô.2 Sδ0L)o Ôa2 Sa0L)0e Ôa2 Sa0L)0ele Ôa2 Sa0L9)\g3=Kg5훶 WƂԨ(@3ZHaF#ThBP48rI!lU;&X`I'{F"`"RS#FJH8 (e5r֌YJ0q\Ut)v:8[0<̃iDsD:}i x<쌒i X#6P>6TDA EЄm%V$B6%`^U:-fR)cak\ fp- 1LZEO<ߚASEJP ZQ"D9F؋ ep].&!MeӊIqOn0f-qĒs=6rpgCG3JHpdlrR:(JCĉyF$0}ŬEϮ8$e.muB4 %} !nt xP,\|b8pЎ,,Ck+˽YiJm-žrb0T>E`&MOZ[ŴOjo:L"C +7S[L3(;*>LRu0\^TKH$Xh.{Ũ>_USuN'wQ'_…|m6)E<>RSK(x'<NgD%RZ !/ V}=7kfnWu0VNZGUoJ,v:~ekz ƳZI?^ugn9mX("Jaf[X;?*~j'\U?*귝ܦ ^:U*B%J EhXʿ`_{u~~,uwi RLb{YGUbRWBk/U?>C{= aY%3F?Y2Tn~(ip`ԩ6|r4u]_:E ` 8 }%P~5 ٳ_&@8MI_\u a/Ahz&Cw  5Pu0i FQ=6U(㦈 ^.v"W|lKdʌK}s9Iuee믘1x[&6;sWPT"~ޡY|Gm 0u?`]72Ip@۟5LP=+yGgƯǼuIOW]$gV 45.wu6kf>ef)sNWg8!h#SV C "(" 3 5xNaBc剱pdK+NE o/Z+۸ޜHʦ~"E NsGc0F#gDyTjGISޭ Fآ^jȉ^F~PJҷI9KF=VtV,=d `̕HD^j[Jő/eBc(32isؖ27e YaKFZaځ*GW(Ld$"*a9&wJ s6wgxs~}ճ~z_OIm"0Vp{epd CT}bVQ  :mN\3o9[evNehc;uݒm']h{ưA/L/G.O}0--c1&NZ0`y LqeX,C2N,h}V onKFJϡZ IUp4/8zƌƁZ1"V{ NVFES- E4ۧg#اɔX_&'庁\x+*iGo]ϪzhM_4;Hm\ ٻ8WU]&@8g$1\K_%J쒲uA{gwIJh+6 tU?OwU=gM\r a> ?\uu6ǗEDb㝵_}ǒ$ AC @Ji+:1k ]>ɦ g.IvQ:BUhtP**_ZaYZ/53gQ txK[o|7ʃ#@wO<@Ϫ0[?A3yo Ʈx޲Wi=ճZUGqG~>x?%_y^% ?t?kՑGz!~Vju%h~,pq^Ĝk<GC={]  /|b)>vNwm>wI-cy^?c7ذv7r_ _:_lIFkl / f?luhV8Ϥo^08x~u]?Xӳ;՛՜ŋq`??Ph2"A P|a֟ʠEQ%';3>6#:#~F\{N_OKODz%_ ʾSH-cb@@T5diq> ߫ě4Bf;b3 `+XtRᦡhx7j$!SfF*19*‚ ])ʐ!QFQ %g ʒЗNm6FٳPakȞsAyC)!d,)*a6ɂ.J+'d% /Tm6p p:f(.fU֟R"bIJZL-7Xyl8B3?RYl<NjMd۾7cr\'g/)xJ>< {`"HD~ߌ52Z<:Oj5ZΓwed(qRH?z٥K9Skzh8&%lLm!Cxic6k tDy6&;Jl)|$ž"pEm硪 LR!razM\kbcTٹslYN]e- f"2CM9;wzkqa,or|7OMDiiQCb ipFe z筧K,9ދǫFZུ1d Ȑ )4l)R,V'jV>8{jm"~:9-[m=SiTj>oaVJ7f}|~~6̅v``LE*KrLS h ˚ydr0"kp砜BIǐ $MAPBżJ dҚ$ BH!e_d2NЅDVfi\7$63g6p1?:g+#fU+lw=O>_]YuQ(ʷlb"jBE[#*m1ʜ|( FMUa"Q;'3;nL)63mTQKNU" hfng=?_Mkumwsef^љ1K-,GY Rd5WcQ⛨RdȮ*E:MO/p>i yrw>A*B?X4iBIAJLZ&^~7 ɑILlU|D/qNTg!'@0K>SX'BqB#::4" 02.)gkAaV"Ysؚ2gWqq~z? p^UA¬Eʅl K.k@-3LE@IQ*+1sF! I2H&o٩ŷYtKd\-ޅԺY3s\tODgQ$)rĮCAT^fo*A)F@닪=}*Nx!Slze=b+cCD[|D49`}dmᏠN}UˉMhj߈4-+q` gؔ|LJ#QW:Lv֊@{SP Բd[5ts'}׻6{e1^<1Bm?L>1P)E Z4&w6ռ7[ C@yUmcPU:죎A2"BY]|wk)tOF:E mK5g5jzմy|G=\ _^]fX*ƻՆyqsv4 ֭W9yW {\֙CM]|{ ~➫*&20z=4׷>u_B|ese뷯{HQ̭sGkG ܏/w`@ HzEc$_T -Z E!dkӴIAOX$&Aft)(S@I@2;Ya.O؝Sq"bsu1m8Wp Vy[M Utmv`R%B"_CMbSrҲmkenS"*|~oro}kza0Z~L,{/1īrO*t^ u[zJ3釭*k->P%Yy*aWT^ s Q|4᪚k᱄j[Ɉ1\YW/H ~<~~~1ן~z6[&Ľz e\\9k_[j-./==LniS8Bg >ׄ=Wg~K4<>g,oJ1z ?~PB}Ety>[;Q`WC^PӫgY`€@! h(Y 1 %EWo]vUɺzsܮ7w՛zsWo]՛zsWoۺzsWo]7w2i' x1d5'g_~u.uxꥒ/ _pFwˇ߶PVR.ZEhu]# Iս/Hoٛwv.ZEhu]Vw,gĈDKa{y>II<֊ | *yۢ^ͿەfA.>pGcMy卟1~ W4|o(go-l1!HLSeZVm 8iS> 陂%E2sbIp:>ڃҐܻ6@)QZW۠QbvaJ#,@H ekA;܁HßQ7DXqlydtK7$#cX3P*k c&Ij[$B 錤TmUӦh-8FgQ)z'ZAEwM@Rj΄f$ΠQF4D;MM.x뜡DB0Mnp2m؁:[UXEL16\_ǖf92g-ί7I,2mf [x{ЃᤄY[*mZƹr(6mAgo}&imG8\]LGo:,Gnvރ|;x3Xo͞w6ݮyGm{;֭=Um0z_mNφn[ c3ft|qGQn# -֕DD-m돗Z7`WZ#`:IQ(ʜw.jE2ᰍt~pwڱ}pFs%ꃣFhsx4<1p6p@v@&+62^ 3j.P፧:c\k\E'ZZFb<5l7 S|H]f&Ùⷰ;WVCGS-NIϚcCǭڄz5uY&,*1DTE:}0$$(H9wX# cC*1Ʀ@)zw_W[-ƻzkk^oj.^T!60/=.nt~VdekjwͫHGOA/'/|0)* d$/:_ᄏ?!Tnb8=:ަPRQ3WO}GL6UM&SɨkHȆsOp8e-KxwdIG%DhDh-X. Qf;9 kne;`0n9%Z3o iy#0&ˤV$5Ǚ*H\@D  5A֟ gRXfAKQ1xΒj=k:(G9*]Ck"E^E햁 ~̅({/=. 
nywdxw⏫ՔjeM!^ !FO+X߶΂qp7H AsN]&bR(<uaḘU Ķg~{%Z4>͌ne1g͛E Vha[i7ѹ \ƨ<gMD d:u51p2>j[v=ZtB^+>&=V$x:3.r#" !EMw.KI9Ǚa V"E"dZ[gm"VI hǥ@AB7DF'`H`T ʅQki8.C$~>#05yEB<[ 8bsZn=F9뱚q\VD¸f6 =#IBX@ 6{3L7ȓoV4#xEhZJOM32qm"AE93Q3 #ňv =rOxpɓV4"yn}/1rOp?x\Kh'M|$c1МHAD"7>ק'$e85t,n|0L_2jQóM(YڄG)EM[v]YlH_jBgUgYD4G*I!(B!HrV#4pAZǂ`[1b.Qi9mUv .K,q)hYGv*^k98鼞>,ds*&VHx^rɨs*hpus iiȓN«Y>LĞ|N'}&@,/@NLvHK`C ?xۯ3w{cͧv{͚2,UyqoI7%$loxa´I),N}Z(o wsrwrG ӚJ=;i8 nrQZl} cJM ^y]hC@eƩ@;QXsl/^?1o.srL܋Gz'"?;ůe_ĤG"/?#/^/"J/U+;AE;+{sT7̒!7F'%iwo|ؖmWNt)O(7*fO^z?%O˔A?a`7gs 벗/K3x?ٺW9<&w hXL|rֿ-)u(wr,tv~}2~Y]D/~U47R: VA_eiPn(_4}V{fcƨ]\ P/tytq GJ{F*;:[A[ݢl 1o1)TőYݳ7[hiŲ`g$j^]6)3Lm:MlUc h 11L z묵!hmrTf*'ӋjY?:D=1Bp J;D@Q$IΕN"XmR3B/SO8N~Ǻ%77ͽ~v/7rXVc(Z6zJ9]?͒~,i#ceIWnbسX(@-%/ ߲&2A(2;"ZP\,I5l9 32wڱ=e0dOӐr$* ƣ*$Jft` LZ``IF #ol+ڷ|Y=I.+Lz+4D$JIjЖsPi.%[ש5}^TZ!1iýQN]oٮB3|fnEpkN2ꢠ &`z;uV=W8%w5l=~~ԻV3[5 lxD4&&aFP "PB;c@RT)ިnNY 2FiYmsv2D&‹ߡKS0h@47ΖH%x!͘t}W+ ĭ =ޮ_)/C7:9WxAnc۠h$ )Ryto~˨Ċ98^:A N$7J/:~07YmgO.gJFɃLWIi 6ъ"ƑF%A 8Mدf3VA\J9$19Hu͎o\.hGV --? orEKs5mHI[,jK䪉{$$(U\J֚e]x;å&!<lbgY0u ZЙG; SW S Zs٘="q;1be|`9)0Z:>qJI{J^=I*pt:pD59E%hHΕrdxz7G ˣ3El$Z@J8j=!HtBy<] 7!]VX齱?S*w'б~7-IK_~[L<9mK>dn/ǎVpK$0=_(»b2jtg:<]w僗WEl2Z#siw=-7{(BRx5)f^|s{5R7ֵ3VwfNrvEbybPʨ4;~f9ɚiFjgedEZVF%eR#pS t~z}~Us3 F,k~1m?t+\`tYe+uX 8'$F>/jR5v74wkشe PX?}2?~٫o{~?W(gߜ%ޅs)jPk(f8ljaE6|yir5\j9_Rh!vSKf6{k$w_@ս2c?k+ +uL;hp̛%Jo^{E&FL**9~D‰qR;ޑ$,JV<$pƢl2&8w ^J sX8CCXj;/YPMDLM|ȫ"z} ֝E:`o4V Ю׍/8B j]ЇEm 'AQqDeӶ gpF(x2&"atV%IjϨWQ=)Rg-ךROmZ9R 圻_'-*RӮa=1tYqG }">ʩy#,{X,!/ǎ?/㵛b˱ՎJ;t:/Y̳齔U$U_B>x\zӲ/ʿbl]oGW} .#wuW $dsnw觥]ÔGd"-°%sUzLwU嫳R/nm&V\GMW㻝JIXގN d?~N"3s[wRŠG-e 2)zڻ{,į:Cgogٗsg>h\%Iɵq"][[c΂uK|Hӵ5 Sb>Gjί8}w9Z XM{Ծ&3}K l;  އIe7f|ey%>w nނ9z\kg53.5FzJۼջjr͕ZXN0xm> :SPKP}s}"z.g_NikD.) d鑤2LQ,$mĘHsYm -r(a~%'~.Ǘ0Fŧ8q"Scjo[>v-lm~]u$ t%yoo5]XܹU;rIC#ٿ- \o]SJl`)+lHl[ H{XSdBU7"m /ip$/*uQK#t7yްLzG{v=p_$;gV?Y9Aeb3RDƢ IE6bY']b2j^H \邰.E%Ih3ʘiHx)2^_f"׮zS(ғW>{χfBFc Kׇzhr="=w٪ntpk`ofAytmj{t fR*E$!A ( JL#)_%o! 
%B3LgGݛi%,=C=~M+r3ݢoQ:8 c|mLVg[iE.)ѿMS}kG= hvu"rh"go~l_eJrJ#)=RWiDJz\}^Xڻ8yNzVIbMLKjWe^}B BP?ۈ?'4n |q>vbEJӟoaEsE{RjK聟R<{(ZH^=&#)Sò_r2EԨm=W (b2Hl9f& BedPLxJOTIOB767t'MӋ_ޝ~&/$ֳASerf^^u1Aͫ]~nT(S]7 Eh\nض4Ҽ Jx1Gv:d畆B\(( 9S8 L(Q142\ru5)ҢJKI 3ހ-UyYVg9YJ{*YVI>wjuǩߩNdi%8RE!icE<3RI+(M S0+,iKM;Pن$[L =)B c(|2z EfEMke(#7kN%'v4ߵ:>XFLN3r Oj;NѮcsQY1&fHݍ/UكhE q`HYV)[Ix"2:ǘ}Q4GF,`:fR9NQ@bij#c5r6#c=_bJ-X',+.)*sϸb%s$Ϧg g:1 (xJR3J"dmHQe/MאYu[$8)s3=/b$/)( 6L36s !1~T͈Gq%<Ԯ&'>1AsEfC6I\ ^+uϸK7eP}.x#)B& &7<)Ju<YbJW۠~ݵ b5EeD'DsHi[Rxs^ Y_/J{q7\+ٗOWE>,_Vus4%xɩ-V ˰x+oq|ن)`iҭXP]CM#=zuC[J v趺ӂ*D&<~//i'%ޠ`ZbN&D'1"̈́"FJ1`3 ŋ݈|xr|3{SƏ 7%}Zr7=uoI4[gp_τf!(uVQ8sÙ.u1z=#")Fl~3Жʵ:3VǫfT.x=~Ɔ yӶdBh3}<@ČMp)|5' +yj'*@g-cGRGBȭ\HJl ~@~N3zG:%uW燝]O:XR݊.}nܸѥD4օ䮵}˃Kۚ/=`l e}Q ]I,UXbk'4`|wl}iՋr˨iG@Մ`ZWr {1m"g+#ۑh|qEpۦu^"L %Yɬ4P-YrVmwIUX9Ѳ?qR<KaAaA}۳tؕ_t>#a7<`›G.7/9yԢxϣr!lglmGWD<*\jQ:\*5;kHaƭҔ4F۽HigduzuAs{ @g :=/f-MK7 Áac!› zZR+aBIw_S["82}ip7.\N:F=F6[a{`J{wuU^ʾ}&W7W{uIj*q%pfH+0XծΖ-om)#HzQ2K*f}[:0a;qVY?/̚RnX[,C ^rik{GMpBpireDt"E%OdVƁ %cLIt ɇKjg8 Wh6̒r r)FqVb2k5Iciz^y IA !Jk+˵&zmw$EFK.>ςnY\vv` zfo'1 bD2$Қs 6것I#[I]f)CbKAjnPդFOkVslAK%K9KB,yH&/ړ'%F+KekI'I,m"!*IL˘9W@L>fAY};ǔZv aD s*|ϊ1eZXAZf i⑤qLKU 2ƇD MZ%9ڡV4g%'CcXmI!ށ \g#2[Oӛ A i 4$Pz'mPIOm8KJI Y.@8 j,aYX ƒwA0Z@&kja 3qY,ɎJGdepƑY';5т\BKɵ5eB52)$, _"I 0:&L*&.K~&' ZX'P BY@ "gĴ\e Hs^{HrXF 1*gAdB8YGFPuLd]Lv$Μ&{*y4:keEFHHYXԞl #IΑeSicȷ ]R=c ?^ׂUkez|ou L mZI|txIŁKy@xc#T~:Ut4u}H;*#d`&ӡ !'`Ge %tpΠ<(] HmFd^1P>xmBL)tq7X1FZ)(^B>yt_$kIu3u<o /x.^,TGW>AXżۂj٠.+ HbC"oߞn~~ӫ̻P}{ۣDt(ʠvPJ5K6G*Z꜄ !, ";Hm] !ݫf bھNZXdN{=zMw(=uGm(mWF(Э_uF< \T"m5fSڬUmo "&b96#;ƒf5$Y.3ބ贛Pb}'t\f !Kkzx5y[O]gF3]pF zm_mֹU5ګ1EU4"GcV/^l}f&CEYwQNޜ%=ͦ΅҃隓8ڌ<zbG}=N ' Z;R:v28v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; D:M 'І 1@eb'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vp@ݲ3؄z@D5N ?(J:'d@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; Nu)TTZT"V"!Z+&&T@p%qP[za@[%u(7t1;}E5Z] ΄[=z'f/bz+OT֟Tp3̅8VxA Plb(6@ Plb(6@ Plb(6@ Plb(6@ Plb(6@ Plbu8[oW͎^VSjlo_o ;]|Ґp5N\kmk)ߚZQot<= Zbe:OC ]P;G ǫJ1Yg譟ݨ _.gKͪϚ`[c ؼ~pհaizuD}/O6}] .,-7QKc);7Yܮ79nM>R ]|f?}9zbUzN'] #C"Mߘ,b^QʗQv(cO [ZTcir \m-A$^:ٌ,-؋Exۘ]S7d]I}1G+'lMgTj*Bk+B$W4X9ky`atE(#?0қp*k+BPjtutUU=zը+GPn*3r"U"BWS'4\ "]#,폮 U ]\)k+B&gBup*J ]] 3@]|O?[S(+/(@姒6^])P`}z8͗ R|y4Ë^!3Go}Y2zhcFFcCnZ;'Jmi>-3/k~# up=X\u*f}hlo٭}G@rQ4)Jz@mimƯ_U ѲPw6ףfӂk7K z=A ^ yٛ㱁`Э? ]>laVߏTz{,S<;]xf>&qh3Rơ[#2]=ty=Q+U5tZZ~=2+Biҕ'ozOVs)-;=n =`sihA`ol+W^+;H:@UQn JբS uAmkZcܪ+k+BPztuteWDWl/"BW@kWeá+kvm?}"WCW7Z hpS+B)=ҕs"J]]g֍+$f t;DMX ]'}ͣɫ+BX]"]/MU ~+Y"akW2R!UV6@P\)QWV+wϡw(ZiTcMQ r85hhAh3jd]kي5aã\ϱc#O0*9WS\jS1\`%sBTDW7*S ]*NeLWHW& eEts[p_'Jk!ҕ;S]p+Z%'_ (\tJմvEnjNWr:LWCWAӺ8S{!.zMh>0Q ~5ֳay*NQ#M#[ d#XTE[fj5JPN\=M%gW[na-"VϼNzQz㵦ӫ-.N>|5>fW~M{^LOi7@i G1Fئ-!5d[g7W_̫.л'^wQpoE7Doh$շ`:;zp];\@tWgnhGt{m.skzuChko/A|%{iz|ӛ|3,۝ݗ\)_6Iš:IUV_t{q/l ]J]Iw]/CV2Ɵ\WgZa`tY;Y;s,n^dA,Lۅ(b2^6Ή.EJоf ƞe?fy  iыq/j.Wa1F\G&シWH7/?;1 cC uZ\i{~rz{mV__r={>]8A׏<4jt! ޕ'+|kaBօe9{\-"}&mDhVs#akm#G69Ilbf W[YRԒO~-Yղ,l% c[M&YůUE!Ѻ#5 {ddKljF*el/7z9t?.ٷNE4Kple͍ iwOɕfmgsja*n[L-Üҥtڥ! O硟^1?>g߆A. |5Z9?ԟ9$h'._vKPci0X-x]ndqcӻCc0}F:,WoW;Fno{DHK!=wdHkaR>/G}-9 sY04\`D$4{7C=JpQ[lbqM`,akQ`7D( %Qj’E*LH|BH0@ V [Fǭs7:.Wܔv*עPy]@ X {{h֫}iqx8D8f(,`1m<cP9k%1 h(*\{rwI0`QU:-fR)cak\ fp- 1LZ,j{CZ+v$! 
i%74 yw~7g孕jn5PyY9Pʔlp¸xO7NQ<"*4y/xVH0Lz)x gq4z"Vf@܏5;Ym^uΗoe8j3,D;9p8#"C(ДPwVpJunowhxza$̍4xh)N84Yl`>:_Z#b6Xk7=۫ߝ&zxP-LXۍd|1aqt_°7JGY[bM0ҩ\$80(VzW.BT~ҿuv~>~Mwv4ܠA*(t{T~d׃|^ /I@²֋%3F_ܷz,k*nQWOy0K?;~}Yml旰NV7Ib۔.`yʯ3+;s%K^M3 Ew8_u ශ?黲L, L{ck\p AKaJcE"~@ !ĘaTduOt~:W&x:΋oo`3Pw,xN&"9NLq\rIb4_10@-P+(|1֡Z:{4XkLlйs{Ơ_y-eVՊj|aĽN&9%?цUcoj32SExiGgSqBh#SV Cr"(" 3ʅqcWJ'7!X{K+FbǼ-ґY ?੢^9I?&XCL!ƳZ Y+ST )8/E{FI`QknRQ˹#1f3d|ԎK+ųүQu`Fkn&] R2ç[ą;/禘b:a A6i}9VNZoQ7.}۞~(i)kb>~%-c.g~9M4,x9!W\}>"isؖ"7e IÖ((Hô(G@W(Ld$"*a9&wJ qۈ!ݟB7o:Gv~*86x (+D8ý282I!ϰ]bV1H0`ۖkFu@VYwvSXO]u']ЄN90q+耋.\aBXt#F:f=uZWx8;n2=z 7 fF>`HFFZGOׄ t}ʙ seCbV9+ %l4]el9k*سuC6B;Jt][+|u {ܴx9}yK;7Lb l&чWL+X$$,eZ ãC'xa5ei)TDW) J-ZI  0B'Lܭq*bB?Tf 3ZN\j  bz !#L)%W$Biq(q)*P{ƈ%L9d#)k")$)bRs*JsceMxZ.<l`gY\4/1DՂñCV-rWDH5{Q80L<ީUՓ>wU˲d|^?8gKQ1s{ݳjl9C,ɸg j=]4uCbyCPʨA؝vAw5EwAK6 |$%"ﺙpH]sJOד/nQ4_'5/zO F%O&kW7D")4` O =‡8T'47kjƊFWuUqs`_yk^ůO1Q?¿K|6ރIx`=[]&iT5歾u=k+u?nK~݅/E h-ƀզo { ``, *0wHsNj F#R%N) X8CstWdtLĂ#8G #nJ¼dvB2u' p.B$RTcƂBh|:"6"<% ŘSY'CljmMlلtMHof|Q`!K'%po4VT[⨜ziџbF4D/rNϙ11W\G{M@?=4U?FlKJ~41[ }Yθ!GĉvTDHvsh < E-(ZGEdQ07H_Z)"VHDcہs7N /H][o~(鼊<?˺6C'LRq 1ti0O}(LIw F>@Lv*w1|Yc?Uy,@}iCхG`tS*u ?MU' r)~}ꗇQKv?J>B!/J@Y%S{93Ku*\_[i~@`Je8V(g5ȳa97@ e啲vTo?/CYx.yۛu5-͆eqK>JvͫT>Yڞ3{R?Kytj̮Y:l8\v_A .LЕgAŤGկjq^")ZP EGr-4^*bO*T윒,wHI;$cPuqdeG(VZD\xX2   ,G%馌U%Y`B<UX3LX[1{-#chnD$FΚ+E.eN>3cuLN{#`EqJӈHq52彡X[4Ov IM*Q7/ qfI$ \roDʵ̨s7bvftjgNBZ'zxtU:'5To&ǥrP_˖ti7$]$ jU*T UvWB[)Ŗ3MKi\p E Ԫv| %&F3 ()& zSNq#7 ,.ږsdlﶥ,$-B[fvxqCngj{ȑ_;⛁`? n7|<|MT>ՓWs~#)hԁ(222hέ.9ZAb>>jB$U(kTN&+3fgɁ&eP‘QjG Wg?ʳ^OY^WnS>qYǏ#ɉs5. #7ޅy&h"e6<]J<'m DFf!93;+ǤM!]c{mS=%zTSkr]"$OiSN8Ơ֌(4D.XxUnh$UI*{ed-ʹ&q-=: WCK.էoNw2[V PCed7(KNkIIhi.ʱӜ,92&d,F,M[JZgcz˭fCgXpog&dMZ(ЀYhrw,)$q t)KS-T3Dz ^J"'s'ܐm[&T @37hڍ}4v%Б>دT\!ٮ'~|nh;Si|Pd,KbC #l1 8NYW2yҖI•q"2y3[2֩V4d_qzZ&1Rm,2@2vHe@BlL&2y$4ҽ+ wC ވMB`ZB<` чd9:Gѿts]Jئn 99a @`裱&I#/G4=QL1`FE+Kĝ7O-x`jz ֍eع~+轰ݍSiORId_Ngvlkdk@c=eJ6'2f}cen,,Ǝ;C^ !z(,. $sb ڪUuLLG0ɪ ]PjSVrv Zpj]JogՒBf hq>yr!ˬ}& k~]אfcWILWJd斯wV͑H02sv]>pl~Ĭ4y{+]:ӫK>'7 ,:&u=L\헟q].]&w>V])&H-ORv'-] G%4Pf'@\ t1 ]4O׷/rʼnݭѼbu1u~vMf0om,jyw0 @]OF9^ ~<ĕ 2ހ>vf@8_OnE?.O ^x8Lf5+G5tlx.$Mځ!CkhQ 4bvq%=\#{>"lC`iK "Q-yrL&!Hnlj%v79 % xxnVC3/8$eDڲI2$d66ڤ04V3"fNڑdum)4\j .%EF 7$wZr/qY*-GY]V8;&!K NF$4H8;"kH1eܹ IXK!I_tYB:LUX# 9` :y tps$_$m]No5nI]LoF.٘Z'UdCTAAH%KɲU!mK66Ia9M:nC_mmVٯذ^<IJ{ϯr_)7MV5(j˲QRy "WKE:Z*=]"P$8(ɘ.xF\P T2KÓ͂cKĀ37x6yo1,F[\ gW%Ij*il;h/k$>qcaor˵lf{Ж$Q{5@7|j[IldzfD*i":*jq*pUpERj&zzp VbW$0dઈTHDHWWF'WE̞ \i:\)*ڰ+霊-ʓ"p*R~qKAI` WOi^WOre{peW}\:*Z<"iUre{W$_ٹZX=i p*7mǕ6hc"07L^Hqௌ l4}&.PBr}z~ ]göUԍ 5(!Gh` ƣ2fQw6f`??}- sTN|\6? 
/šR-k&Bd*㗟ޞ%!A ( JXN!wysucHv Asçt6X {qĥIFD^泰trs#ZUf ውh,5T֩!J Bh!O3pセk/g!f:=[$*y{_p#7)u?7ʹn+Eϗ+ dmP4h|B,t2gY5Q:WlOGJ,"kAH,$^:`V9fBZ3;j4($I>b]V)`=tS1$4&%jF)](vKb%*#"Qv+a[okQ3{I|sBX"qnV^+˫JmS6ʄVʄD%K%>:  Y/OT6GU 0 f#%uNF{` 'ae1*KM6tۚ#^wrv46 iK9s9GaZHdwR3[#DKHU9CQZ!q#'N;2A"l;&M"s2dA lYZ#gO9S[ݝ[.L^8HX^@N , &Fr,0ʋJ'a6EB / G(%-xϹ֛F)Ľ}ua}x ZAp5NzŽa݁ޕĤC2LF=rMP~E%R<&HX'X8ZQtu=NQWi?0fLK}vKW1 <ǃ<ɛ#oZ>۴2\$&H!#C#PK vnڍ0d[iyDZmדD?nJ-_k,r-#!|~N4ɹIs8v` S< w%Ncģ=XES8P-KӲlk@CzȢ߬B< :{duMJ!o?d(Czp9lO孔 *$*%Yժ  ulMc!ꀲl;%ҿ]]aޯ"?lޒCi9<ǃ$kJ&%)1GOV -x4A1D=1 7BMnZA=\ug(t5LKiat=BWvjQeh\Do{oK}igkp*5h>r8Ib=]ih}v D7iLYbUEYhrEJm'J8]f]m@ eI8I)'oTVQ 3ɸMʄgnjՓ(2$%Y,90hTdҦF A&vև-{[#gfOF_oO\\os']?lE{C}#Cr6fqSR|ˆXahFq%:X!xrrNҥYXnp\g2Й87!$Br: Ck2z:$T9Z#gY@KؼGbkUUzbֻt]YdYJk߲ܧw[d,XJc|ɔ\Իq?Y~mg/v|U0jwA'_jl2dNMP cdNx2`N;-SZ`wd6x&y|t|SjF&M fWN΄y&F Xlߒ S}ٿ' Mf³ TGGxV+6V"f{Wr*=:'ttĻ8}>:ec`CΕQuN/Xan01gshLv.,0+&8Hyx< )Sm_x[F_ap=p`O+(߫u,p3̒Htt%#9S@K,#Q0 H:veEYd$@B-e)ȑv'`QC9{Z!H')'$)uTZ9f t^ qLfmX]ԤbB`nġA5W,fαPj&ڂ2mƒOd،HHFnK-tZT,W5cTV'c$cRZ Gm"cG&I2"iZ%YXBp0:;%0IE~D)k?tDîMh8:w3 VF4~T6|j[p) )l+e]m3<*alw' OrcP$71c1xm F0%a*eH?IA$PzDlJg e]tuF<[c$tw;oy^b+Nf(xH/a؆iSۀRlfi b:Lx<-{1m `^JgC@So\ky&TxʻDEF'H(2RL0J00 N_Iݵ˭suxb;ng6=76L^XmͶ3 0:ZYV7mQZI0J(+/V3at8)0]iӓ7Z& EQ03c,#rPN;s٥UDA0]`VeiBʦ"0Q wі}Vi)F\ ZvFn>II8"ph͆ؐMUqBvАͰF |/f5O">ܬWWJ@ Í[!8Uf%,H5PLsNxO' o^^iꈰ ͂8AK6NfŮ7kmg<}#,޺}CܾvYGqiڈrUbX U*mmdWRu}GZ6c]ꋬA :GԀ9$irf B| A 4Nn@"B];cS7WK6AoGS|~ݨegڡ/joW*tr@ˬeN,O$b  ޗ[&gdK ihKCbbQ+3)hDn1C2W<)!+I+mx:GZ+48! @9ۜsYfJH>\;(ӭ4bwfJ^isq Xœ 0 Al yk@kBi(c2Ȅ6#Dڑ;ӎM.c>6N6dޙ,%&RL2Gʈx ^@r1slUyZ<c9F(g/^Pl ,D$.;GSRjh܅RR9- 2q`QiOO}-b? "Qmˮw8Ŵ?-6xf)BS 앫qQ'd+fs18x#O[$gK9~D˘zPrL&tb/OF{~>y+]3z.?p Ͽm!)^}? Ӗ \ 2QTVVm!qIʓ.i&q[ lSUQ_U}of1ݺ/ ';w-L`Ȗw+-j'BER%(H%6:XS1NzcjqW:-ScңqJ: ٴ#󐽵MKZpI`(Jun PN1R$QٜH$QU`cJI`\qYmst2D*ƒߡ )T Z-a2kߌdK'rgWmφ;:)*%wV7MRIC"ܴ`CoW5Kg|W۾Q=ێC_);Y]y8+P0qs+L>jb}_IRR˅&/wo==ͻcoG~g_p^2 ౷7o_4G?l4M5-foѴh׋ߢ]ev~]&\BE!e)}iaq_(Q혂7Q)KHo^И/&P*HL86Nj-OnqXKYmt8kyԐ6uKc1ĘKFFf!ِ_PՁQT.kȋVE 5&v"h!5}_ӍNg[އy|T?ZЮ:׍/8"rJc.;A颶\p-b@ aFaKڍvk7!(9\CIDsk 2gtJ xbדX Ez\OLmV[y4^y]B.I>B49ޗOx&XhEq8woB6M}=7ǓO ͒+a\t7u/,(Y|&=Ard On׮7feQ[-5YTö.mqszFկ+QoH8q}LdyOQO!@=]]]pP|}P[^oE¡G?ijr/(>:'?>s_L~pyC+R+$q_T$a,G¬YuN]v_wklTR~ń/Nm;tquipنQjXY xG]p-.TQD-5#$&Cl8࿨1W7 Vi_ \bUȤ8N~f…_>ƈGLWv<Ҏ䣱8m@ JX,֬䖸YGK𧓞8nz5t5tQW =~cϚ7CI9'Ir !5, mQRF]RQe3}z&PƄHe#Id V:",5I18oП|.Ef֗xwrd54ҸƳWՄdgQvKMM$5Jd)((؄ZHDy EI'ZNf'OGM'ӈ=R’4/UNsx9!YNd)$"L&EDKH5n!Y*]cVfkS[_OOuK1IĤFE̳^~UUPK=vTvRtրId>yZp<JXOEBN&['uIR';<=e(l9K K^[*5H-3U@TYۣ(Eh8Foz&Q X `Y#T6,u6g@;)MH{RS#? 
0,nE *mXB P݉20ĭ&FHzs ͙_@HG4 R>rDQ&֨9R=sFUrb)n8O/[ :z|zYe-n_i^_vWp}Tu-_^[FY-JBV xZB2 nX𹼎$cD&"sRBryY<tH (ROkB-l :%nU $4ľifl ؜`ɆVƮ\HBQp(m~% 5WꋕO`f36}J;M>U$RBATO\ ){h,Yڹ\UYf%Ð=(aќ#"N *nGIȍL>"~MaMl k7:ڴg^3Z6+n hisrEkL` ޔBKooRY2JQ,*4RQ1(%!@<*ըSچ18a/kebl #6>veD0#{F;I**kD(2iMFREe ȢQDElB@G8 $(2ϸخŢ5i 3bcpgO!/jU+/y-¸"DDc3h.XAs;!4QG\X{^| ^l>#P*g.gxV[akk:nNp3y?P* ؘΤB@f-2ZFڞ !OS!Lt\.o;]ew2ZCNWRǰtmd(Hmس?Af;p&zZC@< ]=zЕ U}oOWL=% t2` WЮUF+u*THW )\-5z.CqݸXHaqft¬ђC +JOod^p4G89W C *AfF)iᨮ;]-bfM>q.`9R._Acya:d<͇~XT\ԗKι|z>Jכ/x<v*Z.U_bqx~-"ė{ha>^?)-4YG^oUaՋӚ\p9XI_,ه3jŚxpFZrFBiךZ V5ɜ~wm2'21Wrkd hoݹFtHCw M.UvM.lY^{Mi]q% ]e2J!zzt%ݥz ]eBWwW*^"]ItIQ++ "Z)Z "JHW Tyg*5+th25CWyR}JSBHw vFh9i;]e7D2N;>nw|<3ZzR+ dͶ258jz'ggq^4.#~9d:r_` Zu*2Y=Gg:(z?vutXrW 9w%H.J./*$)J©%?Ep/dy9Xq9.MjB84b'qV|Y :b~~^|ёW~`N3W\= 2ɭS3cwQSj4TVUwqH0(~}=F|VN[2ϋ>Sh-rt5Fhܒ0RscdWa3>гś9]?_GXlڍN[l;rm/Oؒ`@xD˟YT{ZGT{Jݮ  A/:@ҼCt5]et2ٟ!HWf%Tq!Q=~3*4#վ4Kp MgF$HDU`8-LU(}>QvL*n+ 1 JtDTg vFhy2$Of2"gLv2Zzq?4U%ҕ!7$b̈|X`[>XJIjvdc_4);YTk@/TaRtpjy)HNW2FWGHW07σZ h=C+E7:F I++jJze9 ӕtUv0AkZJūЕЕJQ3ޝ gwYzPs`f͠+Kz jJjJzst(ِh?é9t|.rSdqɌVS Yy|\ߡ5ad?g=s}̶~-Y/[ 5ȁcf$O|f5>N86Scq5-;RBW@/;)J]!]q0̊JzVЕW:])ʸ#]ycBr++VCW WcC+E(f7: _]0ܢ\Z_Jhdzlt]*l\]EȯWBW@;tHq#d0&ڰ @(ow?tz>3g}l;jܴ絫Yh쉮f${`37z#(/ٸRg-C+Ey#+L}zT\vxdzȉs ƜYsڝ+iO_74}}5 g_/薉yxBxY//Ϟ)O@_0oo?GyM?mWW|fN[7ى?/gqĚѭԲ%'S-m2|Wr}@{rq~vۛ|~qf`O/޵'Vc%qLIYȵsEl6Bl%9öGJ.lc'R ]hs%26'NB. …\_;:ۊ#4Rrg,4lAơ&a]` ]PpomΌqhYκ~I;"EKQ*#YnRFy(')>'Ѣk#D<7۷ARM"G> 1֚ l I܍)aN"[F,=yd0]E4Cc>oT7F+Ptlr5=QKƇ> h`n~mɶ4⌱ـ{І@Ao XWB40.;[Pa fsc,c1r#QGTw&)$(oDЈ$6_߾̵}M 1WZ:d %Sc)!:ICާ$`z]5D,xtmhA)=Hn$[3 ^ݳ$dews=Y6w{cGZGwH}f>e Ȩ&OMB%_*X )%$ڙF/*HoI D@K$X'--+JM 9ڊ>dg݄EoGZ`L`J"sE{;RCv4-,#bvt|Y_(P!M>1rI r3b1أ9ԡl i ?ЀԧGO Ḁ7C/iW"Hm!ty![C8e ,&N.еЅɦ1- YZe0\A W> H@RZ6f=ak1Tkw˽AAqUȮ)x]baI(uH#CXeإfC Б$xq%۴$X(F ťEnl o@ez5HDqvL&ZUSJ$szR +Z0ȸCæ|+z],AH:JLh)D[?xxfT[cR\ANŜE' s!!.A 5t9Ԛ4I!0΄#72\+x)lL/իY {s6Y(J892r$XBE{LB"=A/:Br! d|A*I"D5ڽ( EFA#̼cIF5ΐ_.} `D,%fdYҭ0cX}{0H.C@{B>3 0׺~Cw 4tlpBr"h>uU9`J;L'ka /I~ 8ym>eBkO'/ V11Ш;&!ZhX 3 b{PT8xiL`IP˦tdU)q t56b&Zm*d`V`yG a/^,,tA\xϰ ) >@(&912e8ǽ#`4ӥi0Ft%+.CF"-Ux$ ;< cxk`¢JrC#;kt=!V1tv]b=!eDC!$m /90HT&%@_!80n2k ) 1 j+`EۚBp ]fcT;6@n!z1 bZڝ*bE5d$ыax<~,.@GF)LEqEIbe](- A5qwpD5xޜ X8a1;L6`E6 DM Z(U _kutўEwv4F&gA pf5C`m@+z뭙+ ."ИJ@ |4E%Z1$7В}0Wp+7'|yM{sq?pWgE7AK@H7({ 6#hl6Tq-΄ΦAŨѭ:ךcn s-R 2Ę_fhH}EeF٥faRC^"$I˩4ds ]a7fhokq.0сr5wtLvp l )cT'6:R|- Vh+I[nڰ $\F1 /CE7 ޡpNmQHI\: |Ty!Y/xp`\3㨩1dzIA6*fn1<c- R%_u]`3;T;)Z=Jvg=Yeנht57i" nfz!@ DO5!-:Z3Ea2S #+Bfw@S Nz|dNzO^I'aPJ↑duwO4 a4 1_4fs˥ -Ƥ1˱XLԤU\GR! N V3f !KkB.?#t=Bq>JC_`j?.oҍz7>-WWw[w{0#O=)wOs5| F7aNϏOs0=8t eTjMx8y{'zw')6؝\~v͗o]>x0}e>ɿ_޾9ʺpfeD+}?«h͍__?'1Ceж͏|y6dK[ 9yD7;Ν=gPU~aM@@m^a5@@++һ+DJh*Jh*Jh*Jh*Jh*Jh*Jh*Jh*Jh*JgZ|۸0>8|9'#pa.j%'n-ْiZ3f"(""(""(""("""OH .0ڣ@@؟@@v ÕQDD$wD$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDX$`#($P }Aju$P2 @테5"(""(""(""(""(""(""(""(""(""(""(""(""(""(""(""(""(""(""(""(""(""(""#Vb@`p?H IV!KH H~ZDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$PDE$зZTvoh}FNjf]:U}bh  ^>'@5\r%yU\ Ⱦ@B Ŋk/Q]e!Op 7 euP)<^ftL8'uRWZw]]*ybՕ<)c{\싺 >ᵽ TJ7[Ϗ».ğ]]mF0Ϭ6#?33]mF$GuЭB`푺 !{싺 2*PUTW/P],h 9 9Mf袚m'sq&eI5NQǑ6mD |p| )) mL1+Q~TٌǣN(*=)tϙR_=w RݔhؘI M_.]`$4ҏ[Nb4о9v> Ԃ),umSg0H Sձ@|r ͫOa1K] {fs09F<.Wx+ kRaH`@>2aRthW(kB%NxZHIY;`rk'PKw9 Trh r΀`ި@ZoUr΋+qL@F]rl_UV]WW@%ٵQ]=Y#uި@x&⻮s6`TW/H]IöPNS\x l KBW+=lAsHU/ ~-&P_r! 
6iRK2Նc$tA&ߛ[j#n >!4f<0ukגX_ߺ9]hD'rH`i~hzyRm3x=]gK5p$RnZ[fk;wQi<*_tq!j$)W.W CVzk0r|_(X$ד.fcv.;;&' Z9*Aw:QW͕`gBRZ;UeDJO_^ܛNaqrujȐr&UnrE EЫ$0ro.VQf;Px;VO2<}\V͓pv :ƊNpǪNzo~<9{{懷g~ᇳoO0g<>;5}y  [6mAuC0dh*CFg]/f)R!t\{J"x!^ĉ߹c2c#-ZK=rG< w44΄BY†*PbaAIK)72Bx7풖.:Z/b=V ytռuw|_D!=$Jc !]]xCLJ9PΥYɬ7+yƃ-wvp]64 hme]MNgm;d{s>#Ӿ_%6墬Vy?z_ԮzPQ @;QoX`&ɦLR.My+r~>jY(Ti)N/ujLrQLV5hη] akiaGkfȚ!ö֍!sAWh@eA3[lيoE"1[CCRÇ9<!IY..l3ߨDqȹNv' yuœn Rѹ?M?jG}r&"ݬ*@kk';K/t>_,l+[l̡{h=u25m>O ZĹ_"M{ QJKV" YJR:w0EhhudKiD07gem̾ imӘLew0ӠϥSJo|3%/|/Apb~z(I \jAkU[_ާ^so}O;l[Mg-؍v &-t/լevQ7.5wU=A`p>lLV(,ZؔOӫ-sFAo 2ij(ѩd/Vzk(8 o5ATS«aXC{T7i"BEP8c9ֹv1a(\Q3)\#0n ;KI֖4| 9_CNw raCPkL`S\8XƗO}{TO̞K Ax)J2" PpJsF A;sEmDxk:gS^϶|{⬢2x+vYbF7뤋֟Y'Lg' l(t粡̻P1Pj wm\jQT˂'JHbFsP`!JriAبuL1 =N̙)yS) |g ,I j:n5#F\/塴C2} {"ݎ-I! OklM VH<ޗ6}@M?Ft2^k(8{5&~A[[ MxDE#^x(#3jICHI349f3mWc#XeƅxEZ8Kטּ aaK zƠ录ZM}"x<*D{0A%(қLH5'+̴s9m%!HBmHD.\4>Vy5̓FJwW%$eA e_On =MJIe%Jd[NbI$v6Ĺ?H^YQW?y|0VFsK\Ǯ6~caV,>q koޜhMoW~Yܼ=;|Em\[r|翜U*nD>l{iƓ7q' 嫓5Ww9~S/7.Zv&Pv@f=PPq!⠪f 57מP|}w諔0zgw9bBAcY] { XWS E.X]qc$pJ>1*2}(YUcb+T+M¶SOᔮg7q'7PMa-u0gQzbG:Gi[2*g@==Ćv$-o磂&jj9!3AL`C7 zcyzb[>.pu:ՄQCU ·f^^{jɡN7!&:A#T<j[*c!n:Jܺ% %hrI;r L!t߽GWJ(Ś82Rr$1XUU5C1x :gN=H{ sJHbʠ T 1E8OQk; >% irj󨼍`OTo_L E1hfajzxbdC&Z+úO<;Wǩ[uӓnu~}l+YuvvN^Gj_JLc*r[?RNQzH0JUv ;[[g ck!sj$RՖ"&7޲9 -"j@q=%qbrˡ\B'3vqtL=B aG/ u[!-aګcizWX\|h"q튲*s 䩀SB"U2fkgq-;܂{MUm4J 56)()YC]nW$_ZM%ǽ:zǎŜS8{AJB0*+c܊`XA@!JF r!}~.H C aI'&,$H G!Y%!)CgMH7iS1N#vӏC=gS@*P@l#L$ޢSuWtNqYɨbJߤzϠDn\ i:й'h~&昘{-8]8%aHN$2QgM=":_\n NiMgH_-)e [+MKd,BP  ^JF)U)'K;,HuUq|t7EPUGLяh&ޫo_&Do;C49P@I!臊-]0s4ҍ͍y|\v]|8>=9:9͔ۗ>罥_aA=l(\6I֙(x(:LM L^5{vO/!(Ӈ?Nɧy,.*O\˸V~5!xrr!O~ᚿկ;p5ٿiNzH!ݍwv[ E:n/շ}M =}{pzhy oUT\#Ƕh{3A=Ѡ^> o[9-ݰ[ XDh\Hgg (Ю@Z;CCb`G fFz[R97P]fHbf|QXZQ }%Jm ꞈ7n-1XLSYFK R!* \QYm{3G3WK̥k&[L@+ Ar*eyd`Xv24UŪ7)kZǛ7={Oy7ק1{e~I4MJ`LrEPtcNkKU~ӱ'  `JR-ޤ($rB%M:QդzG wq "nt\yjW8Ɨ9*>c;31K*lzS]&YLd2jU-VXf&yfX;)Gk<0ҟ~v;o\x%0^Dg2 NURhczfMNq!l^8Y_$/\'AǧR{G3y& ZYxA]D`Pb5q5.{MZSפ48[K-I(6jfG)l؆DZ%)]T)"j\.>^)D\ࢡH:Y|O}(M=Jgw7Y_j zYSOh~0_mFZඣ|]q>؋8.[=Ev}e-̨\-KQ+B6rb*gjuJ8*l,wxwXlg9A^Э"$(,PAkA^L9[&Y/@W; Pck&\@V)b^iVär)ª;cG;&Ξv6ߧdf6U. [ w4gjys-Se`#~-[7&S\\J 4߾_;=3?.LD-8H;-XCQ#ly4M[8Y f j̓ftknTƚB*V΃i#hc)gTF$ A gCq088eJn:q`=Sq2MαKAX&[Vc1<ЊYk;""U 'ӭh|йsl:bS2 B޳vli;Vл(ڒXa,h92z~> SEzWF<\vOf}E jx_0AFFѻy|d1o|2rJO~6Oc;#wC}L={gx̑K4߹V57/2f?kf$.O}SRF6sgUwn>U/o~|=wuI%QdrZ38b,\X$M2_zb"4 hBa3Hx] 71dMbιz X[{4k@9 DL x[!69)4WZ/d T%[]{%V~yJw~D}ܹWTܾ6Lti90Unz.13^}&ނzY߀*BAx.jNƟ?"rj 5W} F\.)·߇Vd"bd\OzҴ!D4hy <`mHHb5(b)r:W(&lL0 J(F:QkݒF 6|K:Uwny 9-eB<`2$!8k)C5A]h˶>b)6h|0l\6ݽX6ʦqp Q>bjK#WHzд<خF†fi4Xfٻ޶ndWދcwn6v&~_mmdI$n~G/lIeʖSCsHpf83 R %` TԢ2ECFJIcc@5?Jj^wӶ#s3uk>[iEJ&|&l_ӸGtw$bsțB;URBK@k3eE<ըs"jDEXG6^U&f/$וaJ!yǒ%0q!2!hU4@*7'&gsTJ%4 q(Gh{grNH۝K`fwBD $(-/8s0Pj L"5c:RH5cl 8o!}Ab s{ʵ7mRԪIeR>b"lp rD),DPqT~ 5Iu~!-:F n5xcx(1p'd^Ƅ$S w*Ka|՚lB0,-ȼS+qOߦqϮ)ӼЀV<+yI]2oV跭}D(zl&Xg,5 "CGToeAdYm4Q Ph~OrrDU,מPsumuU DѺ_ҟ;hxu~砼'{:b0Q$* Tkd֔I@f9Ghע֊V|miQ*D  #D Ox0ɒ)"yn}7!׭c8~b uvO`IƠc8cB3"71$) h$ )Y`aeb nP2tQRJ[vw73̠,/♁VCOr1n+m}DK$4 `(˕B#RqL%DF™S48[1~1#DtF,R8B"T;)gaZ;BhUs2r"n F{,"qj)ӌ]k93~Z<rdku^vsdx؉cҰQu;$2kݴIi$Q'qlO6)U>0jN/@N +2%UDʒ귷f$1tg5^SfnV`) )':mzđr"X;h>Y[w9Pٝ~5l yTyt_`0:HۿGmgp:Tۤp׵9I dP0"=.B5K' v̿y6dZ^!ßӆMĽ83M353B|$LM;P9[6`8evIJtgwmv;16BgkuqS}`@g"Jx ZȮI:\Q'>ĵS҄Cl$"p+:kmZC(M/X$"q ]GRgՎ 2(kH⊒$99ۙ !r-IpbSuzytVr@鷶}]ƒ lB^Ls1hj9؅z˪FW]ᆡ+'Hm &xm0mN>rtw~isˇճΟ |$L2""hKhD< %4ɑ[TXDb,1reovKk> @yuƁ[\{*2tg{jM'(}ZqkKуCVn)! 
SY֙z޵od`Œ\DcR <5؄j('B ) +PjTz1RW8q9mZąxdK+ AEԔ[AZL\V"#DDAw_~$ YI~5nŝQ)q)$"&Wuxܸ`#Ǔ)[ *!OGW9As4mH+-Zc"Ne.vH&|3D4Q ZF*H#"p%frUI`ށ$R%F2II6Z::g~jQ9[AV\̉;휝O料]pV|9nP?_|r'dHZ#]rk0 2b~#|{ݫD\t^NY;*ͣ^rݨLh?yhi痽}gU,q_g5ۣZ$ U}/&Q]_(_64{ _P&Ѫ?!]smTNmY7y\o7/맷?{ÛwW{͏/,+8<L¯!5yǛ 4bhQŸʔ|5Lj9_Fh!vS%KM>inqC cD,aTz {1C&P  ?Z0Dr8iqxtL qH^Yفg icѶ4V$\222C9^J Z\"Vh\J͹;/)rsPBĎBĔ";l9&75x״Bjc&P<ⲋ'?j&B۔ $jè@0*'5j]Ъ&)uTI7*L@A03 ,%| eؒzh!fH9ZўE&D%D5)9XѨFȹ[6FC"]%"+,Y+[9o,Ҁ&*(2i $#ٻ6$Տ`pIn.MeSR)-C,J9|,4UMu+Ua/L\. xTֹZ`0Ⱦq֨cJZ3!萿=iVy+}uFKg<?gzqЧXgg\^EhB ˵K,5W N2WxR/E R )zz7ή/p.`PaO4%7ZjmV'z(d-~ԆC f]o7Y7ww-WqkPq*=b-^Yg9[d}squ]%K6x9dHΩ=& (3iu{EbUvN=(Y}!(PkWY;B3ߌl_A̮ۣJk!>*9A:b SW(kAιXdIk,uy:6x_ ƎT{;x4o.$Z=lBЇ XAv~Tc%2Gi;Wb~^'Ϸ'6uۓ_7/=>~܏`Nw?ֳ47Ӽ$?P+AR GԷp=_\1|`귌B[9krVx[sr:P'/!P _3vR|! ] 6|狼ZW ߻{ouBtRm\a4ΏF-(f:͕)͛ M _*isf"j:yQ%.[ou_ goZQE]ֲo檋4ZfSi *Dj̾zWzTzQ*(k C DDb0-M,NzYR-)*S,h488;G^ܣau"gYSMv;}Tf;Ow-!ɦCUZͲeo]p3+ C# 2.6A%b>nǼ:T |&jl{4(\LgM٩C#"з\ S0|\o??;w!o.8Q=9 [s&ƨTH@)>X@k?{*(z5ɑs96y̅1sagbC5H5EL4B4jBF袴04>7$S,@y'ZEeFclDAޚK!v;YtI'팏RAe 6g-2U1 Je ڤerI4*/v]ǣ3r:O]Dd'-yؾ\1NgNy̡*Kg`9,.(AcZG+%`ȪzNDoB`CUݦG61O& zIG<5Śzeu4ʳP] 茜%At2.b{G+_CN>S̛m+V )hjLOS2AA$c'oeW|*+wڪGAC8yZ׋6 W;H)1M #JcD!dH$i" -)BE02d%e(lĠS1)TN#"6g'(Pb@JT*I<)O:ZHgl*\{sWxc/'ߢ嫏O7lAdܵz\uHP# 7'Al|afx ou̍ ٶD>Jlb7W]^3/67h n_G7я?s<{⡫1Oxt5ȆG6 n-'Oxl~휷ɸl60/RnXެqTWrÏ113gv~6<)dqN0E! 5ŚnQp.:]%-$#E2"T:y }U5 i 9U:)vFzOk_~Ss}TC[ӹӾ)PU_8.j%fK3/^ 2 k&:+ͱוڵDוJ{R8nK¬u[i]t`E 7kUod;})@7F|hd{э SeOQPsF w"n}s}qNZv8+kBśqz 2U/_w,dX(w}ήrQWQ*ND%IHj N>F] _eN%ےA uS U_F6l˝'6ϪCr$1I,S6[~/gN.\ T?{WƑJ^f7M} B1lK1F!JQ'8$ n\$De*TeevVf;#wfh3z3TxGDJNcRɜRafXT:EU2E= 8` (O 5a}ˏeXA(mb.gMgV eH X: !j)0\(Uf]T<݌T\:SýfXhʦ@DX{-9J%B`BeU è&#i#)oC6R 5)$iʶʩH*͍]HHO8_~y`5Y'd=;?6gq!P!^H?8vǗ")hspQRتB $XkM6T)1ðo\V 6LJ SWqߠ -Dz<2!b@ۄF0䰏ϓSwhpi;WyU6W  85ia_arr+C+;n]ݏ8ί=ワ.X$r)\e}Sj`fO>U՚xjSzCQ-rl"mYqGH͠T.Sf cjgetu>eɫxtQ=8gKQ)1v7vR=9Clq ӝNn't3Vʞ_nn@nf~#2 f0b:{d>ۗњUoJ^'\몱,I*?pH]sJO7?_t̕pܻwBu%=݄}y9?;3?J .#]*l)vBpnAl{ExR5f>Y pCF tMvDzv,\LtOɋ^xu⧓3L?N^o.:\A{[~B[p_l5auu Mm5uՄoЯuyE? %K_a@`]8ޙkuE4DO*^-WN%?"R%N) X8Cs@8J !moSbp7G8LK^_du3=B#f/ObGs"ڨEE.'JYkb#2)M(Ɯbκy;0ܾc=`l@\[olKts:?46w$]AOss-&.ٓ%pba608ZUP7ehNy99>0DAƛf|>‹'(XN*qDm%p#38ƄŨJ*tgCJUXҪbO5 iVQ6Y;dƊ8 :\+pvZc KA=0T!$UH9 qR;mTu'2HWb a{)DaRÊ b}AjN‰0.I]g)r.1(X^SH9AD s,QF8g }1Q*x^}1նAf:C2ǭ34'21B07YɛrlMQ>G^bRy۠LnS|yŃ4iX Aĝ&_fzdv!J G)&L.DpCLDn,,xGI4 Gj6CFf(~Fzy9F=t/н3B|;;7U[x+c&"^Ʉ >D 3 W.Z1aR/Kx,DDꥦ#FJH8 T]3: ָT?Ci2}g?:ma?Gz)bD:}iq"1bӗQ2X9c$y Ǡr֦QDEх(T)Xh$N$B6'$uZ`͔SRF`Zlb+.>YCZ-v$# YaKniI?ɻBCs[+dXLe[AUcfO@*b@B+p¸xO7NQ<"*Rl'r#x<7ci$&yjབྷA+JtP<{,Ť<9QF"y\7 0rKp=uAw 9o8Z\Q$8J95v8(JC@ʼnyFE>8{.B<)8$0tQEukvm{7P1= /!Wn7>w.͖1~1$9v4`aXkDXY=JkJY=`@sE`fy`q`VVN I LmV1FFivReuc)?ƣ"z͸{9,zY`RW/ݏW5[E1W\_p7adZ׫ڿMJk}ΰR>㌈ UDrB3[`eQ$^07s ~UG/ H3USv~r]gI Mx$dmiqՂ^jqUbjr:.16$10Mre1gG‰uXXI88܅2l9+r$!!Iwmvvd c3` E1|ѪveY=iXKsis iiJ@-DkIkYgefKy:wSI)yqvMԈl ܬ3rt>vvWОX۶thM5vc,e+w9IU| plcL̰c;&jXCQeÃouwXgw#lynr;& WnI Tպ{{C4ۇƠߌyD &նct4WM{k!'"07/R`Ffyh3xf !h#SV C2%(" 3ʅqZR:y"'bԄdpϜQZGF ^e?H"=F$FQ0ĨBSܑ32c޼ѬGyWy]kL0td25½U{i/n(JEڋ[A OC"!z 0wL7䒁13DA4,x9!W\eyG9lsϏ[SeG/"egWG&@? 
B:֠Y0*ʱA[7 l]7+JU:l'f~^+z*CS-D+FRUgi2JXI( |Z9[y5~gpBg#LEz.{V'mӻ7P @BILS{K_ DKf#^ g?.gR%|vPm-Fx@2` e8+[~$O`- e4p a<6W+{[>*0ge㭪]g|N:,Ԥ4x·do_g?.b/7O=M3؍UoQODGҞZA10 )Ɏ%Ei XF؄2BwzH;HZtug󭛋ƭj*-)QLmhӔY|Z?sNh:42(:AtJمD,cKm+e:XDXT`dIZ`iU0^*UDXvs6 .ʐRE-(KB_w;e1l&΁k;!{H8ڞ?NBbE*!d,il:adA6Ivw]/)Ot-Y|SC L@P\̺5ֿR"b*X!$bO-;6f7ܗ VE_VW_nLO=jvsj~ JVu6_ɛ~}U Cяԧ-֪%粋9 Հ͝ 6A+'S1G̴&i/ 6ސ-)P $@H =9R$mSaKv,ۜ A526g?2* SOiƱX(c!XxP|n>Uf%L{y^>"LO/\3Gl,H";O-#TQLbmHJ~P JX 6)Iŏ:{> !)*l2KF.¾+BE!ZjJ泴biPP[=2؝5${cUkH@Ƌ0"ƺ -1T@@nm4*b/:lkRhDI (4Ad}J{~6Uq(L?ED刈#"x1Tmn#"fu2"$Q,@$ೲm!}㬕DWEj53!HkL1'-ޢ8xcDl&~D NXBZg3-9uc\T#.sE :SD&mIXpCDk(/E R λ<)pq0띭c(xxd9@؞ˊw5,8qFG;7F?>QEr&i^_׏V#*Qi׽K̻ѐ4 !n,hk\_@LJ=nӬWv[ۧ7縠gvԵ5$ .٦KAL9Goz'm7.tӆiQ>tXWg>Ε_xNH'c'c᜿/vWe^Cíc} g+ڌwv[P3t^oJU2;k뺃Ezw %*A'qzؖ-#{"G;K݌e\U?t[kQF)Uws}5- KVc ]:g":;`zK]ʳԯ.Η5s`u1  ȐkQuφ\2@Lrnf==гg.-D2ژF7ytTOD(x*>|WYw-n@,TN$;LC+ %\ctXfkr4+ Fl΁ 9UTҠcJ6gR&tP@4x-0S1n_{k7ǘ:byAsD}U>WA&E4 ,tuB8 asrNSH6EVŽw&;DGkjv:zv EA%{,Fz6ur3q[wy3l;v5V}bݯ=˧,aTwMz͇d:RCt4. ڮ(ـ>ѓ&%CK<nU+rF0Ķ@EѤ l K 8YoWTLv)*t d8JJ򦒁,B`3]f%CFDe+/T Z+>X2% G(%-hfQ#L HyUk8 FՌjś'~ҭR)J1C]GDG]*J g 0Mco>] ^38=gLPtZ>SŃ7k2yXQ":6+",ۧ̿>wӭi|ظs;bcԀٹ3s};2Q0ԕ}(]`mI {, ʎ|v)b?.wWCQ?5?T ׳ٻ9[;f? J~3_E0ƙv>nO3{^9q捻tުק~X>H\\{|ɞUs}' kU&/.rMrpђ䁕NbtgQB& cS9jN\QOTN0K`D:MR&",:$s*NX\Si5.!Q$"Rv`R'BRBM7N Һ`+q6!9G|Zyqx7YMW\SJ+s4HS[T)vfYvaČ|5ZbsZiVyfAIDM( ReOKp@$-39eSU#*U|[DKV z&$,ʬ"ȶ+K^!DҊ`ɰ(Һbr&gRFļ0'kg?O 2~1Qp8X,/ a֑P(z MEv;\rHEoee*# &E3 Z~kJb'6j)ke7!YW5!L7R R#뤴rMUA0 [C BZTгWɔ=d/Am u]z ^[ hs}GP>d,_N,mB[5aIy]-hcC1)D^ 0>VPV)h&j@dS F]8u}AV]ܱ4.>rCʮV$k w|wN v_'joM~~\o&~wt!L߲(^k# +߷ V3iX<;*=tRa%´tfÜ>\N{\O~ȗ U%?k~P˅^gL|TmtOKYR`|+WFb/>k:~ԕ*$ЧD"}gH8E%tMwWSut5Q]FTW#ՈjDu5Q]FTW#tsgsʞo&<VZT*G}x٨]gSJu%v]I]WҮ+iו;PZ1K0/&%ﯲ J\RqZ~ Ý$Rr])Q!cSLT|h9RDǩsu ,naΗCcBZ~f v1r[V[Zq!-JfֵI9^{|?/؊I rqfKE ): ZWRӛoSy G-#rJj#:0(I&F'e 4IU"qC:&hȊf Zs&)iMR9KVPY6ȹ&2q4(`[' !e~1 ]C/u?[[{r"!r. b^8+Є+' 2g!@蘷杣ρwx6Gev' ZҁUnk|7OhB@oI g$/3j+B=|;~{rxVcоj+LJfDgQ6OtI=-x7SJoI{lZ%?/.Pq7F^3-fw7qM_J.ִC3f((@{GHvsz[Ze*?X|gq{|ȿ _QfP|Y#8G-\jJ,ՁJm8z׳q5 {z5:* w@ApzO(4zk@\t=)Uq;8';#OV>|qs ͧEo"n>O~YN !gBۓßI_Prhv2'츞olJ=iH@tҙ21ɜt QL ?Ickdbz)I-V7vE;վ"{;yT~-')c`>,Qzr9n<]<I %dv0)^N8h.]# r.P߮x?"fYٻ^e+bsnbag3&wngC]ЧOjlPʛmQ4%bPTatʡ ɞiZoi Bγ=0WW݅{u p1_ity=4Oټ30@%v Qa&lƼ'aR/y:ɣxibtDaeHZVCxȝ7(N+C1%N+bvKb$ +뭳U@ݖu8:kP+FɁy/{#T}ajub*hp5$ MIR\wRzV 6*;x;B/@NI.v~Ckfw}5VE!Cq˪g\Pㆢ÷oۓ'Ξpfb`eRBRwHnf`%ˠ+q%HJ3 x]=Er vؑ"=EToEvOv$j+G(P3I9aZ(`/cGnjx}z1!8X!Ir1PYa[$'QiOu9G A9Zu[l2]B@̼edpoulޭt ܊:?˺ZzP˒j m(!XS1NRN}jqqу.5n*]{YzlZ<,9[)DRhL L$!@ F * ",!E"bs A@Ы2FNp gh=ULCM`PhFΆ r$;ȬX|3%}_֊#?ƾfwqG5|G/zBnGn@Ecq9E#oX<1% `w2ɕ{y! 
vR!Pk8"W3"ZIV$0HMHEFZ X-\*@~1OϬ9q g| JH>Nl6@3uX6Y-X ?n?ִ # Ę FP&9N)N>H92(E!: ɎemxBkEpZ!cRmq歋y63⟕W«UpB.94蟞cخp `3;\ _ BzhI=7uAbysdpT(~6Y#W3PzV4v4gvczDQ}BsC2UR/ΐIoNyo7_O ep_yyFˎ6$ v V߽qhko5Aײշj+7~ןJk 4kpw W8ip̛%*o^A&wS4U/Q\:vG8,KY߇duX8xRX%"7T蕂l:%b!pʥ!<bN+;-ښ2ZvȖ.;Dn/·i|1'y0Oj5oD?<}&vn&=EE)7y'NENc</w#ؐUw?n v"_F-v \)EE")˽.zU5rl/]^jgQ/-uD)袭$![ǡ}qzQ-֜ b?՗o~xo~?\t⢺ߙI-|{AqyVB *M5j-X\6$5_gC|vz#r9{9n|&/LJy1Dah:|C趢p: "j!k0$ Ud$TΪ()e܂*Rtg, DT$ްH $RZ/5T{*H@Bx帷R@!J9H&.-KKZfy(tnP,VhFUdU(ir|tׄLcdrG 60=<])~qKP˷֮;:ܺ_ieS]_ֈY׮[T+CV*`T].͇pf> x]v& ՠ!((w =)DѯI_k}K%P\JmE:$‚)u\K0Y S2FHLHPE [t_iGTdHg1,K<7Mڧ&m[j_Hw>eA+g xݾvL[Gtjzg}Vf; N(Ls,#gLSݤ=O5jp[J,֩Zֺ[QDRI (TPYr#}R80ɂ3&,z Z$ݱ9j>S 9q$2-y(& 6ώ!% C9o:*$a.j\p>Dazu*wt{:sGQtd晤bF[`t - ((r1$֩GG)# _ ibN!ƱҎrU%7:vfls'dS4n_9-<{?>$ (MЫ"|ݩIzG8ȡY ZYXmDW H#(DԢ9c#ĹPl 3-ea;=ߗfzeUV"Gml8]rkZoLݛ8Lr搫kTjCEi+ PбVHlh}d-D*:QP,Ӛl"of ?,s/&49PRtp*atZ#c3qGv\6ӌmP6BѱpXx)Q}Yek!//J*}a'tND;RQmfQ-*c2'k*|[VP%e-ޚ3du:;+"'mI8lJ;Ls_Pv jˎڝޙ)9 hQb4XH!):˪Aᡖ1y-ƺ"Cff(4{IcJII D4Cb}s?N}3;Tq_~l1".@"Tmn# A6l &&Hx+8 ;x;@*.2"G,fߘL͈( KQq&P)”ؓکN5}1"6g*G:2.·qSl1.ꎋhekdE2YR{"@ΕZٌR21胧ŽwҎ}C.>|yo4 xqCp3~TR젌 jHE~eE "iLQS D-nOfKjAH%-|pPa@D#&!i-$m E4K2V-;MFkwvX\/FGSHc>MjrThm&̕z(5G$Gq\*lqEAx-\(dLǨCHN~IBκFg0;%rYYpWjz&g|8R| @ւːW\wk3Z^-m%('k^$x:l^"۶eSŠ,HeNzC;k&Άv-ޡ|y_֐sfSO0b!@ 8 4h(y#O6Bb!uhi_8/ g2xkoD%6Gy4U/j5Ziq'H(%K5@et^< el "%:%M$Lͱ88;ݗmf:oAg,b9TR  FŢmmey56zrR&/\Giaf?> <{LE4>8_9&,6%"`m ;5oeؽ-YeEZ *Sn㒌U6s3 :Dc?bl^[L?H$9 36aLB(+>J˦=LoubvzܢΈP$A+ȃwF y長EK]ixQQ9D; m֞膞}Mx]YFs)(zi eʫ̫,OUҌPb͋z˛'9+*' 6*ag3z/ˏo׸r׫+b.Nyl:_k\T9zw6 &eLyr89gtvqy6YySoѮ}kOٍs`3iDEjٖ7pHGۚ^Z`0Hz!0huـv=HaPiw/Qek¤Ӧh*J`+-ma42hI xG3q6ê k}Lyg{VkS<]">#w]jUhR$'wѢ U=+),?ЩYJ S"-y!Ae}ɛ0$`TOXF lT%Afog''nKWosW>2}_a̮\*bƧCOufpCPȂ!R$wIj LUx)MnPIȭ=|{;~wlW¾mWVfuҾRۗ2Cu4[qNOE uZ+!+Tބ@.X6Z%g,ۑy5h.V事vy\ϲC˯˯:<*}FS/(f.W3yAZ&+|\am^v@pUvWU\4W,wRp$p%wN_Nꦖ2롐*cHdwt|ONr$0z jM~߷?Mf'孙&{O\^|ijoO뗷lg9aON0a1ϥ}bU3:d֏VE4Q뒴û{ԃ>Xd/oΒ߱\@Smϧ. |OSVbK+N!a6.}QR&8'$o4l { ˠ!yM##|,[),̪֔hTtv!A$vXq 2$#Uq<GJkp9iC]Yhuȼ'[r` 4Oߋz~fڎ;L&brV7s hYĦG:%_Ojq$|'Y}7^ݚ|A*콬q^- Z7t1 2w6?IhiDp?TwU93=||w+m|ػ^7#W87ʝwCM^?޿ٻ6r\WTm]UyfU3& Ȓ#xغږdn۲ә_4&-;A[ ̕bt.Xs뻥66yg|rsA9u_^ YZlsmx9ʽ8&`eF#/0}.+/cl֪2|>Ythe6|ӌ=.Hr\69":%U(Q;,E-rFzcp}w@clVJ`5[q4E@zAn SL0 *`;L `WALWJZ)Z ZٰF~?Y>oz3AoJ( &VK^<E99b4\$ &\ ,0up5",,ǔeNϬ "7ij)rnƝ>:{bEe t8yRr3$.K찴p0iD@ܑTT >eA1%44jRxK$y$B˖ 3 }4\dQ69;^ 쑤g9DTQ*$ǪMU c[ a҅dգI!Ip(}PU&:O";X2n3M O;'+[k2"ї"Q)ynrU3FU}2O2&\sT:2(4SًwS&U[A;܁Ϩs!XgS湐kMB5^Pl[+,LKzHq~h U, JA+0V6~xs۳{г vW1+&?ij̆*9#@rRGϥS+fL"9h|ޚix|aY.ۆJgIB)qDv~Pl)g]x*Z*ԫ+N`9Hc`\O5ūxB*`l@mq4HS .w!%-%ӌGiIEfTF RFea;y]%L2@,DCZ{grJ!ZseT+H ]lz$8PY8܁4,C&R\ N!Y@:C?~[9,ALxǟqjɚ+#r&Wr%$cPaaH̸_\OjWW&5y?^#O3XGMcő%%cK6@{88cF E88xBJ ]5{6n5K(rpc`,yLiOJK֓%s rMH/աEi0)s/f!;t9ZffY&v%R8JS+rW,F9'J՜#ΥwLSv2}CdUpkZs;>έp˹]q ~>ԅޜ`14x%4GjHÐ8*%Ai0u`ŲDcN{-/gU֎ l4wRHjD PVB|A73էmo)k vO::Թw¬MT -4ܚ&bP^|LZ5wʱ!?Z:0bK8ݴ`zg'$$G߾o׷~^=z7GoxM{_i]&)vuM/!\7?>4ȦC +wZ5ֳ a\| ~wu\^NcB,aM~om¥[mhvU:A %!d 2u@!((9x6p7mPJ(:-o=qk=,B3OsC-*S,N'Q! 
Z%]ug6r'Q V2V{sǃʆOlw9yA˗^^^Umӄ\ oYpӃ%+*6!Ih KyLڟw嫈IV&}09aQnb&sF{Iq:QXcٍwZw #fMۣUQ8&dXpFdr!e6'* "4[WVykyp&)Y^d# 0:a+E3Y(BFk &l[96`?5$8:5X.8Z +>& fC;N+,!,}>>/\^)N0Bz2hpAkQJz"&ecoR ^Yφɾ8c}swf(m1D9+a;2?y;A\:s "+VOkUm5Ȼ+Akf͚be`uHօN1LUfوLx2y 1sf+&A=1GdIw.jfAB) A 7 j#g,W`9bMy0WSYna&^W>vQ|˒]c:IMc9y0t{x<)&_i)48ˬBJU 9GFR)BF˶enZ`-#-.CYI{/e(pm/{]paHWדY]kST:OҠ3veL;BLӹ4ɵٴӵy70~lڿlvbwOX] =SX0*|GYԚ hkeo-z%%Ax0uiQ~Z}ol ]Ym% Js~JNlnuCi-D@˽ǀ#$"rm;Twѯ34B9_g|Ƨ^ >O5{nd2 $ĩ"KvR PLsNVxz{!lV<+dެ}D nT q{6gi‰nKIb\VXIl^*qDYR=@ubػ=&ԭ}sY>j7meP9'ЖW Ęe,x_rf L$rH9jJBjglOkQ NZ/ȍ#FwB'[œ ,kV)T c1b")dFڑ[ӌ&'UOy^a" t%$gdC&#eIY/u O9E6^lhV Xwqkit(D4eiHw"^9ܾ,GyS) }b*aI2$jz1G$ZW–6z7xϢˋ^j0@wRp%L̫(":+mnH_vȺؕ1:F~X먫% @ɚ l8DЎ0Eݯ|Y* dڠKceP* @V Wx3G30VoVlln|5|r ꅭIjHQR=V,~KZ{D$pX2J\#riKLL@  ީQ$ba&l<@߲t'V 8A3qcl|wqPۋUcS>>c0rփbUIWWz^F$kYЖ qH:XgP{)Wܙ5Xε6 a'ި~߫v*v= =|e˫) Ѥ$R ջdt)%x(Y?my籦ϣgɣe͛dʼJhA4t Vzuc\3b/RlGFlcڜTSZX9J@e\\WP+UdDrghdմ6,tmң ζ?p ^/s$GYq' '/݃$k&6]Z$]*:*JU9#Q%'{U4ST;؁6wҼrkgZ `G) /T1%\2LE͌J؈ ]b6wo͓Vgpyn}k{o#zѫGR15ȕ|Ȧ_3Cgg-8vV5} J:?+(hy/7B~=_~pW*JwXh5\4RSPH5ꟵFuv1^̕/Nj=$[o+|Uu[^,:MP\jnXĽ.uQ|7=b~Z\-x8xݷHz]\wJq{ &J-4smdG%&gV])!ELew; B7l$7gCG)M'z7ysw;be-Yr{Ex6ɊUZ9 JR) KQA5J@bV[^D3#A` {6iAoZhˡSëm jlt=j8]y^{l6 |䓙"'ա`^xzW}ej*kpV3Qܭv(]UUA)G~XsF&)/ و{mMRH<.6U mʤ E.y*{V79i\f<͌* 643vB0 jgzM7K{@hc''q:  F&3yŭ1 CC*@+cQңeCp%kXl2|L D-)j. m;f]%epѱhl'MyhVL̶vc--z5GsFIxW,1'JJp$TJoB&5R)m| Vq@ 5ChEG rM,jI (XDmJ06i>Ec[+#ʆQ332Oh*2LLjgUҕ iaB/ J *7T,g6 $w6\&*@<RmhI 027̈yl> /NP۴,ٕUü=/`S朴T~TTIHgܳȍ,wCbk;mCž=X^OvÞ379{'YH ,d4ȋzh>U\6oܤS Gva:.λ<ޭ?\f'ŧzXpQvb~ \Rfg?{ L^޺ތjTSzEUM'<̻m ߫zQu).[#Rt%.Ђj{\T;<[I0~Blgj"ִÈO \u+thgm+B`tu:te X:DWuw-ovE(:S wiBNCꋡ+лs7t598jk}0$$Ft4>>Z@'ث&?qxuE1JZc"聯Y`B5eܔfU +Uჿ,N]Gnt~_{7y=t]^sH|so$\Uwu?v/mίSx48-5ك8 KMܜ3 3>/!"tyFճ쟳yTϖ hMZ[}gõ2\~JdrBF= WuP^$]ŏ?.>6W^Q)3; 0fF+v&OZHma#kM\m$f真w2 SvA|\WuնBkؑLPmܻ-t5j=]`:CW׊ut(BOWCWBZOW\VҿT.ȖۋZ6kds8.`F~QG٭ {s3*9/+BsAݛ-7)hdl/&~b!yWwq\>a~^K[ KƱ)iRL># 󢘼¿95w,K*& }MWYJOHDžRP4 2JgBtBi{4Uk ]\BWɶD Zh_"p]!\`+thiBטNpA63tEp ]!ZXPr ҕa.vg]QfFz"tute9Ft3tEpwn4+ǁ[!BB+ +tEhh;]Jz:E2 x  ش#iQ"t2Zd)ZeomB^A:yQGd$]XʈIQR?|Ɋ4|)M(;=`8pu#ximb5 u彽. R(n|o[|Mw"7T6P9P[b"ҧSӕBszQ<=E l2Z+NWw܍ r|_NP.b.Ҡ B9_ 䝏^oQ\7G_ة MV > eo%g+ka,6734WRJ:wT9)׼a9C}''ʈmk[7FH[2g1y'VGn]9`]ΌzRkLꯊ1hZFowhCW57ЛL[7xm;XJ&F]"pmWvlpfRrl)1 iG=|'F4}MNc}؉[pʏi*r"p.DAf"rQxxW64.M2(cZFbASȃ ~|hKfz@^^'T]hͮLMIMoJ-iG4EM1~ fۻ)3ޣ3Ӻ}Im/FrnFRr mɍ=́DHa#غ a5ٺV>zkތ}3;Qm9ŷEsm B' SLV2]pyml!K#H]K\a҄UKG\ҧҙ0y*a +am3z`Ռ }}r!\ԣY»20tݏ=,y}Y^<ȷ#9+ EECpugm}L thNТaB +ٝmr +BMPZ ҕ.!` +k;_h%k=]Jz:AR#':vLD+9o;] $H';Ot]+B[3entJB+>ۡ+ +tEhۿwE(%_GTg J֙vB+tE(eˡ+skvtuG>tZqj;e߂xOW=]qpm+ [m ռtE(JVJ.JB;qmnya2>\p/?P#ed:}~sjEwu{Zkg%נJP6sSVUY3>챮 %  W0ж!NQo r+l ]\̮ 5tE(ճ(#]!`xCj"FNr[l_";tEp ]!Z[]4j{?W]@yk4* 7NVlY1G-RvsgIgg([F8VwQ؆X$J=qE|1FILh镟wG|ϟgS6q2ЏX*;JМ,C[J*UU?^ٓog?d|OABQ> ^,.%{u9_{뼼F2(25Q|b|~pG$j :*fwSo⻜A|WE`yۯo$m|Y.^3YdD]*`TҞN;H*!  $KJ#]I.SrʏpgZ{8_!aؑ #Ƚ MvM e2HzHV >DJ% I4٬>"0D;ZT>(S&rʲiRNҜ )>8Y)̟;>tռ\[wjxk2 8WWRkvrMGg!b7^or;t^1*Vՙ?ôK?խmt S/kkGS՗|E^1>_5IqћEW,)*dx6>@rgn!cf[p6"`De\8Гs1TH[_FjbFk@b5"33t;9 s-F)<y4g+0V\1 9:hA l'VKnUbd=4;N4: avKQv5-O=kKF_X/=vaF0<3~V1t'3LFJfSCNӛW}e:^]t;?#zxVE vP3Md;&]Dzn)v;$:8P=!޶8\àwXpکw@3߉HL/Tw H6aB <:1jS _ G2rk˱3lQ Q`H:sFi3VBaefp 7? `>Nדu.a j v&mF+tTBgTA6`yuJ{/jLٲ0KRHx!J2MYP%3Y'm N<<ܣ!Z&y&)"D_nJ{A\odQ#O-y|}hO?YلkB!BN 2xH9W &R *8,4= W3נS({(Mf7Vɶږ<QD8>h0eJ0C .+pP rxIDŽiK4VNKG􌖎h[g{,+t3B$p@[rhnJ!%R)=fw7{lA'k攷µuR(I!Fc#F(ɂM›g?b<͚=x88Jڒ@"ƃ$Mz! 
6*N0n&=B#gQ٬y4rq pSJZ5[&`6Pc`h.P魧&Zc\.Ȯ@O 9`?/A?x ^˻j\:SԱ6S9 S7|c:iUM*JSM0)m"*,T0b-2+)UVzwo[wo1j潇|ay'i=ҢZlAnq]5QD)Io(SpA(a_E+C c4QΚ)xS㴈e#=+ݣp|'MXP_c^YE=橭2r0ɉ( 71Qex/"π=NǩT+؜AFtnw }:enm}kӗR5TOǗ=OhJ̺ȭ, !޹\w+)Fxt$mHa$!gh*jq\IN T[Tt '*h@}LZD0jڅQki҆.s<~~S "y;`]n{owK7j-iB/D¸axHR&: pa鐼mpFnO3gTN{<*qc#5he mEb8ؽooI~aiUyvwy20*!δ?0GSl_H2tᵦbFABFaw47_)@Ygp?c{>?_y;Co:u4䣋/@KV|b_pt;)loGW9_hj.U[:m^9}J¢.*Ss;x2oNG1?wa~C*Eж|3jP!\8NVcC+M%+} FsReon 㪓>oN*weHI@"aᤳEb9/ QL dҟt};o')7O1=ޢtT3:}a9#NR<rXx,jh=Y]~_f U*B-~E@]\3@UJݠfxMLywkô& .nStB^mH)88֚udcj:ܭ\c(|S2 c PP۷I}3emK@?UܭX(Z f+%E8Jdr3+Ym\kDAP\IPXƓˀ)[ngrero5b{ʨJ=":"YHVT$*fr0bP*!^("=%MǸ<nOjx}z9!ӯO8X!Ir1Xa[$'QiOm9G A9Zuo mYb:[9Cʼewp/lmo3ٮ9yqfV3/я'!5 ;i&BA5.a4|;;tp(rlvԪp7NʲZnNqn#'BFcR`*&i B! `0RP ؄֢ ²R)^Ԅ&ʡڦ)c$pqw`plNQȤ]І`/lBgC?M;d9MW,׸Oq_4ov2]s znz֋5,oQaa'<:+eZbeĔ3\H$Wɼ[p)W:~XĖ SgZE L *H`PđFYP;:!cPP6ϯZ g| |D_s9.7.4hqeʕ$Z*<z9liEAjO1[*5s$3+|sePBt@4"UmBkEtZ!c hd74;p!40Jp99<ъsN^NggqhyoP$Z3+yDg#1ʡ⫋w1Kq%79<ɔ؆\ơKm$Q‘}~ ]>[.Q2a.h8G'WT9B=ꪽ<?7/m˻tUrv0\ sfpprڎ-7Wc 6jsG'oOh98z-Yڞuݰ nfu|9y4n0 G0b^ǣbW}{Y+#xu{VFRrג^Qq0yGr32ͭ+oϊyɎB_qƓ֩;V~~uR7Td mdI"*o75ߔ1﹩™fX@n˧ӛ㿿7?}: OZ|/kDeog~E~;p}]3Qko5AײS|~)k~ןBr@ pSlQvo$k&jm [潗,S[H)S*Q\:'w8,n_]r:,B;Ƣii*&8wɨ "z $sXm@;XSd_S9}Ė9}Ĕ#[.lt>ч7:GAH4 x^Bwɫu xD5BP;C&&yA<7>wV|A<Dey И`!"wIi&$4g*JJ _ࠁԓrA`k @nb+p+qvd 4 gpKgf$< WD`)#8hC!(YˑSOCL$"GH0;c"x#F:RX0)&: J=O\i5n5p}21,+ݚv=8y~H215~Tuj ':)Q`sz KZq֔ٞBbYҌim0;.F5GXכ[.Jlݓ󴏴َh6*Fe>[+D!3A:"Mhc@Rf{]?g뻼$˝['ƫSahPj8ͱN㨚uykB尪>MȺW߾Fk<2Y33q[Xoa|+[O֯IOP33k0"Z#fn)-'*P2 V=cFؿ#lp>˳mZ!Kb8x] oTx-,q$OG'uh=8{&݋`O}.y=0j^HiD qLjɈDcj 2SluTFlOA[Rjn`!LJ;e̲x&+eY`]_C.mܟY=:~j5;=IӺu&=|&i+gCl~Uв;h0wNk _شd\qeF˅\ 3F4/{1L 8&WsM  Dy &g F:-:aϽ%oھGY8S^ܩP(4I#EUF 'j# 4͝Gch/ HB4 l ڰ\*Ԁd| 8.i-DFblP4vz\]u%,h.4!2BAIr~e`=\jZ\ q;cMqqg< , 0}tϚ5r; C0&q >A6я+y0x\x_B _8h~>hȗ>O bƓ˝֔]c/s$&O8?iNv8Ch$SdEXYC}&਍/-nK+E¡PҝtrJ,,I]@f9S .抗uJQ9 +o KRWr.Еtt(S/3܂J_ ])\+E^])x4/#d_T-m<+E?DJλ +&vK1JQ/CWW?j/d{~pCϴd?r`O=芏tءdٸѕfRZ'ѕ OW2%ҕstwC[۹WH`w~=M`KH RhZђ;tVG~4M[L{{dCs4h0gqirkӝCA>Y6&wI=d̂Gp-($h` GG ֫K'W-nZ ]mJQ:{HWŽ󳯠sչZC+Eݑ^"]YMS 8.yuh=xHW/~7C)^j(Qv/#J1_~:=;n,GF6U.M}=p{6(B}{! MfP(K>ןutA@oG>;&Fco4*o.,= nӴ!ذx6ʗ2qv3Grb_G/n`>IW|.Yw4mF 2zGܫ֬#m6>%T'īG|Dyz?[&07gf`~A}8}sk_n _u?~sQ/?+l}_d'k&nH-mX_{2BV/l}1o]o_nƄmUUoomcf`.֧.gF՘do{%7SRr-D\|[IΰmxwguJH6v2-`х6W"Ams?!\mfy|= Ʃζk>ُ xq4Islj! Xd)6+[E3C$Z bω6{Ghb60JTZ`$ܭC3Z h6$Ň$Ztm zsHXCȇ!Z!P8I1#%$Bҗ 4/Hf p[D34JucX\MiԒ=""9ۿo]mɶ4⌱ـ{І@Ao1ĎM!:Ζ20T`H!KhdPnc0j!s:'`}hxhl # A 4?#I]zo.rm!j]BKG6SAd*Y|,S3D> rpޜ`5 ޵\sAĂLG)6ރF%;ck _u8|Xd-b'܉sIj9P39d>e Ȩ&OMB9_*X %%$ڙF/*HoI D@K$XfjȦmE3nBN# -Uk9<&Ht=k%ġ:D{|Q]Zu;a:[gr@,+MVFaL9'X@TTPtC[BZ+%?րԧGP5 "(g. TbU|M' l 2B5 %_ؠOt TqkЅɦ1- YZe0\A Wm4ozTF)FoC5hW5( T{zC(]S` Ü` QF:!R-rˆ+K̆6I6>HiN +QK(l(Mj&6j|#c.L! 
#ͫ詧%ٕH,rzR +Z0ȸCæz+chK@ -%H`,ՙj |uZ uU9`J;0BK;LgoZ5TW/S|zd~u#D m3bk!0!ƻ>*4&$y:dH}16u 2j i<ѣyԄ0 V  .g؃JH UdC qti,Ѽ<=%f0A 9ːfuKU;BTm:޺,\]UBNu~T}g:y'*NLg7je^Ab; > %և'<5Wggx]vIEMKq V\A+tme,#zԥ $ny1ρ 7D2&/ ā pА_Ii0ePEx8ggR nk 92vQ>xUjhjw ,hIfՌQ#VF/ֆQXyFt $2Бx9v `QmQ[LlM5:QZ|M!jPk {؋kp9{X8ga1;L6`E6àmDM Z(U :C:kϢ;jl\#8Ux!6RnLhLL7퐄PNp_lj`F/ IMd_yn >geӋ\$.lP.n2QlFP) l `M ߝM-2QG[suz5dݨ<[4zex31+3@99ѐʌ4KFÁwCRC^$I˩4Ts ]a727_ LtYU.HwTP=`ǰ2]MUZ@z|bc(!==`oުfX}Br;?UtS xmN Au?i%I*aX 0p\ sj*JC$ [F{q{ÐE\kn8׬7cX>jt ￿yXͤr\nU 0^Ff_}nޛX}w~+)no[] dI3^\\]6Շ'/wX)`v`|$UՒG۳llK*[UFz!yO1j>۔&~ܦ18_ r9>?ž1D9' +_l)vrbR_?7}̏ۇwpe5cj0>w۱c=UazHN lF hPzONStwO!'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN ru~v9$8PZFNSty!'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN rd@qhr``@8k۬w!J% tN b(r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9N - Ç6',rwq' tN \@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 t:N/Catlh2PSu^_7_|Rrw?JRޘ!b@[)-p0RZ6K#)=F+Հ vEWMw;}+Dr]]9͙-s%CWW5t(!:AZx/DW^Kf` z>\+~_]>c(&NW6VWuG V# wCٚ]J][\;+k ]!\#BW'zOW+!wbiRGwp*Wg:;}Zmm9,ofE%Ԉ鱖cs/?h1_/6P87cڞ=Dx[z6j2m2ʚ>ZYMۻ* z5gPA֫(O (\@WSkj1I*Y5[Vwh"ǧ4\.|my=K+p7%v|[noWN) ]::bōҕ.U5jb!ЯgS in F7 ׎8\7 JI7nF$Gwt57++P *wB] ])c`+l` L"#^;i)Vӡ+m|6 g+k ]!ZIdD) ҕB!+,pּ+DNUM#^+`2+Br0sWV44wutRWZ6 Bj0AD{S+o;R vr0tp ]^)@ ޜ1]ww\ϏvC+ ete-zT{;֚~) BNDW'HWBܛ\홽1#nǒ=7Am^C%< $YEI%k*FG2+rZ; _ r>-yjp{ZT^?{kMfl5O1V(R[xAUB}^jj{eu5Rʇ\W-wsJJ嬢=dC6p˪pUW*1_E=p~Zh;^h$bU!Z>kPzɝHN[r@tԃ+P yS+yOW+{+$}7J{-]!E!\P JwBHW[;(u݀BASR9+B ~(t0C%S+τ{t(]vS ~2jL2F+¢gNJ/H7nPwBDW':++P +^{7}("z҂ )DWg gVFㄇ;]!JÉNpۨv|8W l N0i s-CW׈WWњ)ҕΩ!+|vCWֳ/#JNK]`#3Վp ]!ZNWҐzʽ[;ڻ5G^bHF %ʠہվEϥN T|0tp ]!Z+NW +! oӧx9q> 1ވ;[:ih+fC44(ʓiՃwɽu}&:٥z[7^)3zvjm v(YTnZJ? Y ?OCAWj oΥ5恗a.nۻ@qsj_Kz%uw)(?OY@o5WeQ J"mơ@7+Iꔃ8f 4^dYmVNyƍᬬ&? ٜC宙 dݱy26֓d Wth|QZZ_Msӗq~+>]hסIƱl77޿^fP-u2{?;YoZwkk?u]%M7|slnЁ7d.^ue;8q{ ՇvM~ M"wY6xY{`2sZ\f>]-2=bʌ<-0|uA~iP~n-[Uwo{wSoa~AZ /JnSJߏlgŹ 0}'0$ƣAABr|~΅=cU[Ζjl,&u{{<.Y7pF^#t&ըMw=sP?s;|H=ca&h|rڶKYTj::A%$ wXbb5&<t/}3YlkS=s-jR^jaud vNY*baҧ L%Ԛ )uH҇YuVԪG$X{\EX&kUZAyBHV@ɰ/Bfq'+*rfkxyzZ\@xrxZxeEb EY-/7 Ʃn KxG.'\Zձ39x'HHZ/J) #?eR'(Ȧ҂i:s&&^yDDzNZ!N^:'/cܱtZD)pA'%s3^RHAԹfZ$MmNYIv{ #xӎw8ul]-+\zkNRafnfwƅ9|U/v J̳M=ք˟TQV z= y$9Wsym_UkUMn&XvŲ)N:JZV1;["m>D-TNuN A;ZEPIH[2mVܴ* & M* ʳevіJ1/%Ge)Rmeyc2xoǪ38ϫF=I}4ڭ~BlnφWk|y3/yYs"sh/aSlG%1r贡Ư̆ ڧk'ߚr "TTFb9E 387Ǣ(Y 9Nfaq]skcQ'oRO[U%_h|lWU㳄Oɪ4̙ b֊ R%Oꑉ=\TÉmvƆq!!2 FI \ KF()B%TØ2ṫyAcY,& 1;2%1"15^a R0.#r^L.@72+d "̕:3;eD8 E+}m, 䙘cr kť#i ю383⧋EM\ggd_^"^$^|rFIeŕu0&:p \^J1QbN__{3U (lUZ^/WCG>;8e:(hQqq߽jw pLA6 ^drA5l{kqѷl0bn:%u@' KEeM`M]jP0I휲 q]:kgBRJHT"v*q6gT)(SGIMg aF Jʒ'C;@'rq6_&uATA{rVW'ڶ~zr::Ħܲč& N/̂pQD >F< ɘH+YK9Ds4%QVjZ0sx=&up|91b "$fVq޵57v#翢x@h/M\fSuuDIi= ^$‹$P:Ԝ9n ~Iʙ++ xX!Yĝ$m'QM(PΖXd%"'5#gO9o}3ɇmʗ_B uk#,6 Ҭgɐ ۍبq"RQ_%plLvכm F ğ]Y1ZLxy>'] oȲPZSNDց#(m" ~ ]**#!A0f`jՠ8(Y˱a3xⱡpm { gFCkIL9/^PDwc$a с5O8xDzDi/tork*_s+6F- ;{-9`lG&JV;ճoX0 - a" yxkcW8F'լnџo!|H9HqH!QPV C+(c9X!'jzv`>M[oeт8cgۨC{ {EDN,CQ![.6iRX$kV>P==>{M dẃֶA۷z% AcjlGؾ4{cJ%W^jx/?nq7|<+q7M`? 
1E]R&S:rNw`=5z} >z#ҜE+#$s{!s0X0i1M CJ]D(B;|HA%cPQDiQE}Q:֦Fzٓ@i~"5Aj|ь=.Oo5~Qs:}0xw1}^q7- >x ݚSL<[Yy//rQQF1Jl0YUGs/BaCmRtK8Z7;=y7,k4hg*Q>lFn ~zrNFp1UƨT @ H94ڼ7-k_uDî]!ߡ6h!ab{ rABP^$]{9-SĠئBc )IfWĦ[.\$ sNA}3"RrNSk64#gOryNQt{|*5S/S/wnNȭ&c݅i ?T+ xn1z8dz^]RtӜO#;jyo|ػk#C gGa;`1?^\m?G8_жqc%_ν=TѪd"],S*!jHr6)d1hhw @1Ȫ 8VQѝBFd(Pi6z0!>i# ns&@]3$ug;g3E5a/sq'3 1JQ ` iD}J*%IŘtҦCt,wAQ{e'Fџ\O?wS~S/>>`<[\LOGc˓W&˗<?O cah-thu50 5*Ǔ/"+B|>YR_N✌WWt'L!; Bղ1OR~1W0j: ӟ&jʆ߳pi~Jzru(|aq̋ff2'xuWxEϢi}'Szq zh=7j<`]NwrBx 5/^7O2bqS]25K 2Ddb+&.l4;ȓWy?Jqte_v1':;̣6vDkSd(7ܣq|,bЂ{*n:8~M1t R:XI4&\uAׂfY]ջ|n_+@Ř:xո>yyו7{ \*l[ЖnenǏݤ)f'o>}.o})ucξL.)k)=rUUZKmERw5†3c$,azeG7.w;]1ty]ˁ&iYHpUnګ+'1i?]XقME+?_,iѼtD ^z3Yo<7وQV/CRJ6$J(Hr KS$z׍>xհ%@I jԟ,z ^R#a_9|xtl&Y "褒wQ T!Ү,u=*/ o*˫gjnfq.v˕t78P3z75 {TM' E&^Ȅhॲn܏sgu1tQ䖎 6Kօܑ%ءeӁ2Q;{'v'r"B "rA?l+&[4%DD#t$g#tq79L>`2rRQr2EV;zCA$EBpd) ^dlP܅յl8ܿN4įTt:(Sv5Mc+ ݢfsq&ujtO%:Wxp.hPAqAP\D {*-] l<$`)*QgJ&9![V _G4 _(˜J% &&{| H -?8T\,5MR;^`%96ۤX^k5OgdĨy&WӸ~B-9S7,W\OW.O"tLd2جry K(bScNbE5ozXx+/_uٶMg-eRdiV%%i1b c!Ǵ8gȞ!X4Qa/{<:Zӏ&=|}a+x ڀˆzQo7!pIXygFWi.=&CVqfA`XB7[y4ӌWq5# t&XdrFi[paȧ\]6 zcZ\% U95mT:[*=8 w^*K Ԑېpjl@iUI %HYG;P%h.X6`8%f:U+KWnٸ\?F\ c3 GmYys1f.0~-謞fOuS_HCQ8t5Ua9jX\i}. ^Lf<|3Ey= Lhx9a{\BpdR_σ!x9._[k5 5g.9]b{Ţ%sKg|!Nc8oqQQx_6?Uтd;Z6DǫcAa%R8hRh෺ \|d́]fmz]H}}2ښ#VwgɽngW޴?8Y7cTK1ofxqv>{­횃h9:;9/F˗;G$cRvcJR=d`c2ȭɬW5E- [|<~^*͋ˋ2gmLؗS]drS \U#i- %x߾U^Lk77߼~{o/˴] 6Hnog ?"%'>iJ[$-"i\_,oH m2]Yg-`c ]1N$xdqR^]mL׾bNmm@6\ [%TҺR[pX%o[tX _ߚˣy_u[lsp]}lF^#;d:RN=uT]g#|XlB⬞U1s)2aetNK&rڋILdm,7F,MIbz݌k^=XO<^]|BD}A= @u>a,w^ŸnUΆcߞeSzO#Tu>d%g:D6l~}?6cH_vݎx;)o'tG-ۯw8VK7ײ~9fZG{&K@FPsNe8Mϵ_WTG;M[聣WGU_t˫7M{}9`ǟ\cl~.og|?}Nj-OvajtPO]4Rn#/v,Wn }ubnYP?ƮDj[od+xo|-/8^[[lXqWmǖ;x~hW2>3Mf=f:6)^ޝ]"ram>oR/(v.z-iopEW$ײXpEj WqpC\ZO>HhpErW2:Hl󇄫*Ş+H6h֙q*9脫)*"\`.m4"*\Z#BpmYn4NѼ:0갸VછJuKzls-ݖk HEd<᪏Ж*uB6WIg'k 7E[g9Ȍc%_N^d  g QjU% c4[8pAb8ې٤U*ɔlgV*wG{ g/oXkeū=GCF^[Ř)2o\0Yi7(&.:E/7<_Q-Ti,sĥʥ>/97yYSk&(wװr"Hu (n `BHpn iP2\ #>I-2RXpC\Ic W(x+ʐZ#C2᪇R:i΋ Hv;JǘW'몏Ҵ>'& +Ϝɕ"\ڧ^;H W=ĕRBL".\\ţ:HNUqeYW(Xx$WXpEjWҘY/&ў*\\%c5 J_ 8ۮ90'(Xغ(ֺffJ gjvoc;W$D;u&\Zy" W=aB^rcqΥ6Zn /oAO^v<"=`Q>+Ug}vKci3gp)f*2F5yeK\^ ((vRkۇRޡ=bQۜB -sͥjTc4>mTl*/KP ʝQ2BjP;T^g5Fr$лukt,#9RɡJxp$',&\`ɵ*\ZT'#B4%W$Zv hZ%B\b߲q@_7j&WغVȺRf] W-zy{]qE^W6\Z U]%\= @rOY`43?3{, S@ 6Et7L-_{9m;*hV![-Hnھr>YѪHYו1^dM- t+JY3]3)L$v݃_˷mRRa-smG$~᰽bz\㫿?.|4 VPomcr?pOW/:璗+=U.yQ 6b0+Y^5/'z$CS+d%NFc\`. J*H&kMVMDBǃ+k|bs'+:P%-Yp%# > HT-H`b@hpEry4VqE*&+1HǺ"ł+Tx">h06l(xpErU4H WҚjiAD+ B=&b::P孅 WFphpEre4AR :\Ef`nݡC:=Njln*!&+Hzls#xpEr Ă+R%\WtF2J~9Skq5!ԩ:jl4'tq9C \:"'ʣT_`=>LY8-m ~oAFr(9Mk$Gr!&kI~t1c))$fc &hFr(zZZHH~>{Ƅ+Z P-''OK\)OgZr HdT*H!pRۈpWB4"R+R+# l.xx\b CL^J%dLA \ɢYIjqE*+1J9XJIפ_Pc]&wyoܼ-o[d lUqQRּ||zkjQ] !*l/۽#^wuql%v Ǚ/T?N%fRvBbiBO ^Db9ӧ{v:|Bx\16??yh~#y ~8{ ͑POǯo Ӟ@}w/7;:QkeM6ΟԳ8bG>ږnh_yJz]>fΖ G׳-eo?\.Z.eoFUoz%7URvִY9K%H7t=c+! حj!ԹZrSmr)fͰi|}S̝SHOJC( B%̹I:P,0e*:*9L2hCwT*S dK6v4GIhSBׯ. ji j+0Zjf%z亦ܕ)aM,=~Z3q5sƖjh4U4jYЂ>G*Z)y}6qɺM(KAn8{,j2 աL6(\G cUlnc0j!SfQG'@Ȩ+ЈK*#Iuޜ|*6Y\^u*L%;KT};D> RtޜyUUT/i>s<:ZIͺaհ)=Hn$]Q>Fk|XvopR=cG[G7hcm V͘QUB`- VRjh) AF5KP!RXuݷn\ D@K$X'/V[ZQW!5*dlzaCKU+ܜ-X֠Dt(X(k! e 5P #o m${|g ∽}Ai!Õ>R/hoF.USw%4' ǎ1*Tei58!a%wsƣvf0_mwrۚC ިa}XXQ L0B6#&V&3x:p4@.A0w%sJٷƮ2AVa1 'z4;E. 
ZK PNj6V Z}ͽRIAҍ1&(oˤ@y"mnhƔcX[ N9B|ߡ *m)€ *n;۷At;, 9/(=|.CCF=:ȴ91~fmR]0@i%$#$a=_*Y9$aJ 8u k.AޢaFLm =!XM,s4akk(iOP|Ǽ\6) `-ȕ7_EmPiLm ق콎D~n/eB"n PivCktz4:%P}.(Ⱥ~yݞ5Q-0ҷ 6GXEfMQPօ4ev5yk(ĭ5"{Rvնi2W`ɐ'r"QdԃHet5/tvKGeDBxy7;^ѮګNg;:bgs*6eupd2ɥA3eS-(jJב^g@-&ڔ#W;F.@KlPUU D/m4,%QCr:AAX7k>Me*Z̝%w ~d#Zheh׀: K%Y"@428d8> -j NE&;} \(s^7^aSI A4/:I"uYm*@]TM\ȕ#JU(j>2] Z\N[ *ziMP0ߤ~OI.vyJ~ zH(}tro zw F uO+-X7 }'̭z{%EJj'Y(yl?W#2ݙxC{.( 3v rIUmϮlIUQL@f(|HIH%`KR0QUH.T86YYpUo.8zڦ꾍?-}b'-dmj䍿6ILdhb.!{v!5Apb_QQk;8]koQRXŴcZPQ<D믮JX)뫅7@A{F=ڄ02^4 5+4("[xvk-jliv{SWp~lϒg|̆asδ/Qr=X|B- I}چ2k.Hw0]'~풾RJ6h*O&䫵(]BF S1 Heu55T043Cb\ vS_E[n澖xҤWe7o"ċM\g2 k8n ܇rK\]]"{79.}EsJH "Rqfv /ht:Q%l ʷ};ivMGbl}$ڷT1'<6ی3r~wjaeǕu X*n2uB1FZYjTІZ(gxNweq$GPT1og0B^%QKdK}#Y:{|@# EddDdO0>&F {ɌA6A+,hymw.z01Ȭ lT32([`]FŔ .7H6>gsD97}'(Gu󞺢* ־,\HaZ.ی@ SŖqY:nWpiA7n}mќaI 5~7e\Kre,+TVOKY>ݭ!1|A.5_$}ɅrF tKA0m"߯\r޶E#>r&K}ΜGLTU<YuIѺ5oЁ.S_6rZ*Ԍ\rP趭TB2fX0k;0H9s1j9MD+Zs9J~\"=@s?θ6{ȴQ'^))n#ɔpyT91 .syl-=$Fh~8 פoF`RPa0&;qBYX :0'T5ߕAH K'#fvBX3ͧx F)va9}=RZX_m\ cԪ] UNT R$mYO0und+N'B`|:SI/)ۧR8TMHdx%9q={_tMzO 0Gt\lgcƷsZlJbd#d\Yk:ωd*,2iFzUnx%{5 R0϶kLm$alL!r+)VW_rػ%X B%tK䵙vb92(:mKr ;MU{Bdyq?}6]6tA66Hz\,&\V6FΝ57w3n-lN]FbS1mvTfMl4Ů ֬A4moBPRD;R˾ a'W߼sV8R29_agB&]K_2VEJ C@Z~U9WڹJ5 (ㅰJaYеܺ1@CcXki\Vİ/ *{Ma~ڙ>R#E /{2s ;~jlMۄU{-+^Љ]e8.vyn!-YmбARlm[ԲܴRZɼfp &7۫ۈѢ;~7&A3k*$׬Ls4ue5LnG3QCڙ}І'dJsujĻkb@w{l+]˻vl2ڠh#x8M0βPJ+oBH],+Ě<"kc>dR¤ LGI|]󽤆#Ro Trfz{hJM5CryH4&^Z)DZ]|S֯vu L}Wf[iYZ+w+meG,Jg8T6<1@GC%~s{G0Ҵ5<\*Xbu"Vɏ PfD.nY IVvnXc0իİŬ1= :>.{ԓu\2aM`K:6hged6ra#/\ᘳ 50O22T.@A>2@{Y.Su-ev;kCnHvxm.!p+YU&h,,j wD#, )guPDc#HY{")ǒ#Yk%.R FuFzɜ&Jo #lJ[#0̵XҠ"OOq0v*(jY %efWx%h%PX(R`DEIx")r\WM1W u 1]i:HVȗ[ͤHGtHsjҗSQY"m]ӠFѼŐKT>CS:-?8ȩ)A^y5Ccl\.;{CO9ά$_|2Oi[HFD$搮L'K4DپYss_++h}k6hq2w;=H7q<:b~mqG>G"K6jr]q9H~!@L7fKl]Tw}ȵ1I!-x݉Pa$t3ݢ?\BCͲMaLMRpwssw"z&yT?KKo/:E>MzR }_V.W凋/Mn"=^^ݽ#Y;y2[U)qtZ1+ mH.)OOERw"gcӴ]yw_ƙ6+Qu#\쪶FCUss /5'=8a]ЃDׇQǫ. $.K5T׺MrG)~XŪ.Jީ.g87ӱY˪ ̶2Ϥq3ƍJ\As8+~UE9̆oq܂n!/nbYy-r/zD0r,{ҞbaGr?q7v$ga+m↹fw!lnQ oBn(d|^g^^d9?T(̿ls}Nxj tYrN|>Oy9s%gKE.4x}o+&Zl7|3yƟWWwDgr΁,H)5BK SG m~G׏??GZ|?Ӆpxe,J/zN㠷Vj bE8Xu,*I4TK "ٱDF~ၢght:L4#FϚTtBrɮPx2U#HPժ-Osg `)'j mc&YsTʳ*磪7)_lYN7hpb!pJ0pW᝿/qlc8x錤[K2N4dE@[kRK`ev]gl:|`{qs0}3L5֩v>(Er69'i x6q[xTX[JqD<8ݮzx||epCQznYhuWKc!z龮bNCy`B3>*p^T/tkw*>6_>\ U| TC"-*kU:"ڊL$/zT#_xRK-0J2#vO?NK-*D+ ƒxi}Fr SdVS}k-s6:7^H_\wpvb@ h\"߬|:FmD_^ū'a#Ĕ){Nno4ex]_s'sp8ҧ}Gtc~6|Xلw]W݇":w1<؆ j"rqᄏ]{E+_~(i2;/K C͎*tts`%>M'wCkl;F*;rZrC_ &R=NbܷmK˕[= c\XMbYPP^43 s,8R(O!L T66iStԩjB|Qٿ7k6DFs콩&̸oH}X~b ‡[J܁ * _=">l^8 VpfOLO'ŎA7ONd_?U9:n'bi0D/3p3$D"Ԟg/>ʏv`ن9Z3dGBX~Ly"'~#KyegM̱WZM`"/ݫSځ?VKx :rxhG+=^sm1Cz-QcFq.ae;|qDG+sتa z#5(/?chr&o%(c#יq<@.2<> mԟj!Vz!if$|%E%$t`xdAWkej<&R͝߅2^}] zyȴV(&+(4埪TXOʔ* fLs9Z39v!㛩Fk?d,쵁kx_>0.$qq lhex D.p&lo@43DZ f˾ o1ٌ/(m7Kͺsfu vhـY0|el7nۂQ~&5F~[6m\8Fwm_CziM6hNm$i)>N(9qgԃ%H7xI/ZLB:p:aDgfXA. y)w19KF|,f\ia)R!ﱀЗ (1HC:AkȄPFiٴig6fqNqAلv b r+ԕ< %^hB .F6jCa{5ZjԨAC]ZP3qiкeF-*sHS;ع2V^vNԼռܓ6n#֚͌.xarCaKdYO\=ArBPJb*$^lakBdΙdOСx>o %v1cn` 8L`HHYJezCU!Lj;H}2i%[H":\ofdΚ2WAX&dl<ގE9͆cQPG3iѱ(HCʤ;Fh1@)Q|B.]cwp8/:X2A/I%̅[Y5/Gϥ݉:̓@C1~AlN41SCRPT\"Vm"ˉYPtOu͠ &8xՄ(פ!N;= Pe.iMiG8XD(W 8{Q^cm}pFꐹgWW&e!vr޿PrL8>XqK3GqL98'T Z2. `sW0oʊ)VL12me׼{Vkumҧ9 qcrS!&Z^fy86/0:HLv7%#+6)+peu:{/ 5e\ 1U. 
)VsZLlePK2MjQa[2T aʖ$[ZUX:_Ad&cU1'g c >1+!}KCZJwuhB(7gXdg\~S"Dj/4iAiAfzfa%_X/Yy4FpaI2?DKP+NDZ:K;K;*̠YLk- jѰElZشi-b3mӵ6-1K&4VQ L MDq+D+_̵vׂvZ B LڥM3tYYw86`*IcK"rYF3Ph!'2"'AO*5oұ ?G\ѰG+L6 Z/ȉY^^#ЋdӇfKs͂]B875FJnX ]ak;ܐ]`V(W41-L,E/wo-H(zU ukN?\7%3,9gO(0ϝ[)ޢ+.eq= 5{*G ]lcW< sFhqO6QS-u^ l~g:~7 md㌊s3VxR5[@ΔGsSO"F?r 2Y w+Y4|w",X{r8F +п/nځd/`.bt'_8qNmk&ՏFjZ +"sXB/,y6aE%LyZY^zB&cޗQv8:֔q|ٸJ9dRɃ3ӎFZ EXGxU||J{\wIm2c}ǧuq:Ig iY}YK8&"GHPZtV3*ގ&^E"WRr_%IYSm:G,C1Y~n \X][*喝DZG\w kw4-1̬d@()I (1bF;0{*ҧrqԞ|N+ {-gpo {v^.;>V'[VQHkTʋYtY@NZs {y*] ɔ~NƘS* Pu -h)qAZs)F<S})SLJ-Ƞ YpLT]F\OaNOiE*"Uf::]yVze>tx&!B \p\7ii6-~%/]iQl|Ψd6-: o@w"rYQp.xqșw(t\ĕ K9/ lҁ"`Ik6l'-ׅ-1tYyZ,z<]if$ϥwsZ9D8hoΥ }˜P!a:to :X܇,o]>f5ێg2G'qϥ(wxe;˖v-=Ԛsd.cq4QX{쑕gq3C<9++Ye]W?[TPv+WHaTkkYXWDkh6}ʂ0'LWQΊ/eBw=irdբ,O q-1BKue7A"}~Iͦno4߽>Aóbg>)0V8w+lq'VFGC =}BX` d$b_}ݬ@N>TY띵RB͈*qM`8sv$^v9 ֔q< Ѹ9 ]JIFe2F'j;ʤTԈF&DdH˧pq[U{I:)WpU%w)au*'~rXyX$mWzPZ۸ v3r8z~$zlr  }R '!S\.eAiR*H )mٍ]= raj,h-$.Xaϻ̾,VGwᓂ_qnW]iڣq}w:n3] _~=l]o'P:[A0>G}iqqWZגuzw\ys|q돯j{~g$л׿aVqkW9ԣW$Yyf $5uؓ{<-Ztcy`o93iYAﺳ&C=ZQ}n>GzN@2x3?wߟc`H!ٟ߿ɌsF糟.P-k_.vX?z15mtwvT#=x-hz]aox7} S ^OP[,&a7asM;41̟7ܽz>_۔$ޠүxڇh"cwQ| wz|=GwA`ݴP_/_ ^ ~mz\<Ǽ.ߠ429k57i3~m42I}?Jy9oT~ǃˡfqQZb0cHIŰ}={7ZNdߗJ&9MMľ5񽾮RZ˔ I݁xKMgc>%W=/ oLY݋A/)A`!q/19^LmX2eZ<*.$(@;0/s _ ݘqQdeԢ۞o{j`G5 gFfxu g>G%p>@W _e!%8Ҙ;l7c 6>0_0C"18@z@x4F!F6Ĕmx`G#p  <4Q6sfb8Qǒ7gGmb-ݾPyd?Q&"n3o69ʕDXwFQCEpɴpX 2`o\.{d"ޗn4(/KcO=I.0;Ha2qyv.h't* QP1MmU$XtJr jhj}R?_EtaXy[XF>|pQD+@9(,mDj˗QZލd9GP:N>t==~)GR'CȄ^HLjk~ .@h :v/qm]]fV_67W= m@}Mds Mn@o(?`)K~4y[ `+u(!,؄?7jwwvQ~FP`:^=? o"ySdRbf@`em.JYhBUXVb1h96(-( ݳx3_)0Ua8ol= 2FuT*μ1N;\tX \qE9jR38&ː0/KXgɗkXgBt3>2TnH7 fЊnVtݬfE7+yLtSnVtk )"p@QB8(B)GA4 ^87Z3} F>`̧,΅"t]ko7+mC2?M[-z5֖KΥE{8#ۺ:#X$43<|1]qu+iZfP4`nԤwϨ3wgVa,sɹQF[&@46ȥЃ\r\~n98Pbn'2ЋUh:?T<uRND2OW>뿤tEUSW߼ߩ>y*#q_>i?KWD@&B/Fm @֧ԗ(7%W7Oy}IL:Uya|SK~{l蚥ZQ:h1ס>фO?HۆQ}Laߟ%E~⤄|Am{{>oSSrMh[|~L;20o`29N Nuo?JϾooʁ9Ň3 &] ـB >TyI͒OUDe9TJUYƬ5!^& iLC=m,^m: ?ꊣؕ@CU?D0@А~жH6 RdI"%{* -}k #0RH::Jxr5>J8JK 2Z f -H_9r2:ŠDQX19GU - xtCy&?5AQ#>8ۙ83aV`$*k5dU9C}jWj]NtFaY"HAx8Fёz\"RDhHOģ>;'x#ώlL+sIJN@(ϞS5TeJ9*RrKN! U3HՎTmWT21C]&ˇŀVyFfeGzaM͋8%6Bg߃ LWDBوsi"ȇވw|ȇ|h|(m4>M~z4{+_1DT27UI$CFZ'M9’Gx?r9p/7Ǔz鴧il<s>]?0M~?q=6n_[M$wΪl^ 1Hm,C Qh-iSH: q1Np[Mu ?5>1ږ]rSQw*Vr*8)qPEQWE=F|p2wPfo$JL|ӔEK]կujS~vbw5! 
Su(O8QcZɮ8{Jo[ 5~5Bm{ZUlsc]΍psgM zWm ,.Nݶs!|wYm˶617"rx7xQLB-ir|͋$"|̠9>s'~֗VC\ּR)!TwKGw9_l>?wGW@,ŜbFoB )ፃ`Tsp:mD)91;r;GKJe+?z7̷A-!x嵕G$]E\D,%)sLy9GPPoȡsī51_aJ rQ$#0j0 ) !Y_6H ӌ{H}O4ݬ9 %fwhМud%:/hqlvgm*fql'O}P4͍'sϽfzo$ o/ʏZ,^=wURDTXy)M_3PCuΩLTE85yN-j#@Y%.+$" J #6ѱUL"c D2lBԧѝϚ)^&<]NvYzGGO1ջN[K"%6Mi&Ӻ)Ι?u]frv^CeIEw|TIK-J޿S@vQzۻ͢_K#^5GW 4]N转[ac]9f6dcr/ߎz2W*{S_秉 wy5KrO&R^zvO#~OGŌO-n\Sgbsb$nս$f4\^\\?ٯLYqySeit&50܆ _N/oO{<::P6zTwXV-`]C (;#Slkxʞz2C+G^Bޛ硵V C.Ç!'FOiRx7*SuޛuTث%b+13s*ɀ^VRݒV7#c 6<5;mDgG9|fZ{&\],zʖ-*Ϣ-ܽw(RGʻ;WZ; wjjlpCںoUg  ֮ۨucmCȞВ4N<;ȥ1`G"pUAhcRVD %j)| ֆ!2$s%uZ]ע-is;FZ#f8R 9!e?9 }4NU6!'bKE6N^->KINy{.vVe*UYf;<MkY,CPk fAZ VCj:Nj<-HL&di;BpK\ nZx܎U'2NVef6Ȳhr!&؀IfBVz/yaZ`/㋩TkſG+&XwR+PLØ0AsD yfFyr\.g؋gwEie3ٸ|wCX8{M=T6)hYy1e̽XB-KWwB8K[Oi.َdvBM3XojW%:~Y{w'v>ȹq8|z&iޟN)0+Lۦh8^i9[ڜnM7>y"0Kqmnx!^)$2Ofs^rDF)㔨, {z=)&\p |l-piqMUc+M<"M\SH?S,n;yuVYS4ni(=`$7+ m| E]܏V13s1۵ʟpXX ҃䧫qG?d\mco>􏵆|_/ziV[9L-9fot~[h>T峁]I8Uk@FX(e̡h\Ho{{WmUby5{0|94<#&:$Z 2Sb Lꘘ&mjT{匀 9d8v*#hyIN˃n=Sx CpQgAi@iP0ڰfMNIMNrnʁh"uxaGCʶwmYdF& } )z?\mC2JP!%ٸaB3+7A#u\)G"ԕʍ6A!`}d=azi_bv9 f6= ^4y ~nI6F~5@{GxZ9_+M#7klhem_!>7ߐSK)Vv1!V"#I%pcGwe$a-ۗ:( 8(n []G9.g0awM*?{WFJA/ӽ>elOch2mT=FxYD-Edddd #n4Myu,M6JX Jjލ  7i<Žx-bcf=%#NV%}WrX!鑞%_jP &\YOxu3AKcu}e&Ξ笮IhյF,lt5A"=v:kؽֳjrk)R;h; n͹k-v$v "E`5 %C哯0'̵_g~JbQT)j*, M_苇EMk o֬Ũ7e&RTٻ*^YB#xC{׏֕zx}M*9@x4Vşr7+(Ū7W&!ˑzo.`Fy'=i$틹l]0j wgQiZi|=\"Y !K]~`|1++QHQgc\VdRZx5gV?oF,o:ξ2kbhrmc ov;cт"k2#I[8uhܫe_?kH}z ?Xe{{C@؞n"D19"BsEdV":ETPպ'|9椧["V9BɋT@XJ:l%*F7..Sv</Tğ,HHd٧y3,& gIţ 3X*k qE+,Xс4Ҿ ,4yR?goTp;(KZoV>, e<7b6Տoӈ< M~>QsRvΜakp ~X)@2/k3Xoi/WGb?5h}[sӝ+@Еg.1xf—gFC݋ K/y' HiGǒ"c 8Veo02-̦}ӯX \Οm$lhPXtT(zyPW %5 9"^q9OЊD Ŏ?lm"Լ00@9S` \]aACҘ)cr-B!SP6QN]̕kõ`9c󰤤Zz 0/&ҹ  ʼZ0:ZtzlIZt6{#<&9ƲSU^Fd)^=.['0Z-{_0JXYdF82H B #jEVDida ($.p+` `zR|"jJu*ⵗ.骞6HNL\gbK0ŇQD,[# VSpqU'/7WE1Ed*Mc)3dBzUx75 CF!DqFFl !{\mFKy!AgX<22[Aȥ`M3jEZ"yk$*0 6wc<|kƫӉɣcYك_Z#Iy-7kM j,iya0'i洆L"]|ޜpRE.y) @YwW>y'CSU來"Acڰ-!D(kwF9(<O?cropoL&ׄP*<JYU~F+A%jnmКQ)OQ'7c.ܬiT 760e J &F/v>\&중L*mi՝Oϖ#j)c9g6 6f4#d%6WDo 9uUxX5ac^֌@p"Um+{$|%F2RE'}R3Knx|\IW)xBa,gIJ 'O~vnA$Ś;'2CH*=U=eBijP; |r嬗HLQɗGIvLj浻aKUEduo%R^Y/8E{z9NxKWo9]\X1/d n$W*Ҟ^Eӫ*XxB8-*2Ʊ )HsK9FDL[-gXB_fc0#ҴcY7cze /ӲlٯkyPab 4*O6|i,1++QHQgcW]ĂahrNiqd ~c.*)/[b|]^_e *e\tfget\b}$2vnL9sfjfх,\Bkɇatkiߓrbd*E?sF cYϲPN/05K"dUy WX*$Q?{Ë_ox>Seѿ2[:?iC܊T 7"uVjϸoAgw=hv.zD+)k9)ߪmiV;n: Ք!* $FIz>rX>3=B}nJu"ԏg2m?=R/1a1ʱM`TZiÓ 4*%b?1CpwoQgJz*!j{wUI0s/ ܛtXnZGoU)g|ӃHu _|gw"7\<*TWlV!ս[%k CC-F<{q?,ߵݨFÏ- lV80csnǽҼ-Icג. fLc73 Ф1(|}Dk2;p8XZG"#t2q5Pc 6;3Zؼ Xp +1f&㍻MT%?{x)t%aD{_Ct˒Ri5b@Q*D Izؔd(ܕscCG$橃ǩ>{`LgՖW'3@ S=ݻot^)wl AH(WZ S5*1v DqfVW{1tV;Z!h3e^ .rdiAeatgK z0ԁ1QP8(- -IJAe(Fc1FDMp.@KS!]X/XP$F1 Y4CK1 eᘘ4BU/bwmqH4`;zvY<;/ښȲVc'AO>Q>}X*ŪsUU-n%'9`q~1t,}PP@+(EeB0T!|qp4 4@ `cۖ^oRNqne<1N ͩ*nC+xA3~4E񀶌p,NZ"#D?1*TYYJ .Y`6[ۂXitv#l3q1GI9o"[sPc4σAMQA䇞|%@ak͟K˻[O $@yОO*}r|].7S4nw>? ̓Zc&E1^嗇Or7y2]Ȏuth^}<ܒvD_d~ٝkO] <.(0UNu@!Ynh*Aƺϻ2ڏmzOԺ !\Ew)}8?w.U^M_D֕ A $$ȱ ZNG5lBHjWARRd _nvxaf/;ARO/Pb ԧKARd Z!Q ])*43>XͮSM7j MW#'І`ߦoB2Bnĩt{b{īsGe1*=S&DźJݍ.Ydx}C~YEAU3v!8F/qY=<.`RT^}i.7THiR/X-i7źa>:Y/ĕ1 t|5UA-=сr.D:qA{Nݮ/zKF҃ O cYMᨤ&D/L"8jT:]T45I.=CXZM 'ҏ]RV"o˸H &!JJp'8RWC^\棽>5\{NC}-[;ݾ<(ɋ;FMYj„!4䙫h/`tIEPN6wI99sJ5xLf{O!\Ew)',9]c[>]78,E&Fj Hc |4iV_vKFZ89F2#'~_.*.daoC`R NH0/hgh3QS-)/+e7YA/+R* C=X($_dA.\?nӿ'?^D*f>xpΫ/%'l8w/>ȋ_g@<2P&_A&&'QK8*$YM9X*L2(krlOoI1eQNnm;Pn-tbQ6fn" !zcQ{+-Gxnw9 'ѭKKv2) }|/) tp:cM-[156*YFւ Rs2rSJ@rX='Gh)6OK[*M%C4ZVeFp[.OeP .ݵ/LU* uL>HERъ[BD(q\9,F)k$N8n>0sX܍h}"c3TK(}SF{ч]bւ½! 
rE7xMQxJ7H}9N8Xg+<}a'l['Jۻt/j"pn^Ozl6L |ǂW6<˽k;5t#'$Pݞx.zs3jUxh_W`}Ž,s!y]o3le?e7zi?bk{ٿ9q- np:jgq<h,/;l;v9Uhaq'ڢKFT{0s ,2HbWء*\qw;PH.rͩ" ~b+st3\2黛WvGvzI*kû哐FQ)'5Aٕ@ B3Gc.њtn^FډV\xPeG[U&Ȩ *"D2:p\OP5jp8I7J8rhB폕-%%宇eLG!Ƹ5!w `QnBeA 19"d0N.C702SpO#83O{l|@+i4sR k.V62{\C3J̉ *CMEɵ|H(%pb*"yA Ut$e*'I@(9n6Qѽɨ\#)7wC%,u;H١fi+ވBO`4рPyTE*Ѝ,vT>Q\}j<[A1*o4|$OHtwG8 ngި 67t|P5t# mHAz7Qr-6z&dS U=wh&{1l&99*gRsx eJEhcyKqg!ZIB7l(DZ*6H|e;*)o?NAs*ϓ’T[P<}PAt-ܕ%_te$11,A>rǢ8dB)qW$[x~ ,)1\v7 s se"a҄9%HNѳXwXsX?6 (Rx@LxUjְjkg P&vӉivrbq9@v촎B]fl%;s+UcN_P:x"Q}k*1@mE$w8 $$ťҊCTd$8 !Zz!\ UqN9(/Ɏz)wZR%qܾ,[y8 ƪQ . LjcD1j6LA3B)ʟqױ$HTY ۶ތƾU\<qpgLW1ID`WJ@>[ӸCR/3S٧}hƨ* *Av`۰FAks [2Ccɣ# 0"VFlۘ1=%C>mz[(8`b OE6-Pщ _60D+}Vtv}>/K:?N"-rZY뱔BgNHgI: 2_6*WSCƝ( AwFyKYp10?GW|uhjiz-Sԓ5h'1VFwVC8kWV !CH: Aߗ ˏ)QS&ۨΤD$=ƅ#={(nKExfިܭ`Ϗ|k  A)F {X!`I>H}`ŎJ"iԤJ=(-߆1i!jj}9n )06 s,<% -&o0lk=8l/oêK՞ȇhL:9_!$s|YQTLQZ 9dQPZԭDT'Iht]bӓ s=C!eH=}>Uy؇s?zyi-utV)ݔ*=lBZ &g#8mX@ ^WF#x7 {lܧݵBhUwB5ƸzmB V`Cs.rTtT"ېwZ4VY4ZI!8*K$ qSM4ӸC[c]`N֧:Us67n0n< p*&dd-|((G58\qlc1ئ]o`ԧDg 8bMJA?hI; n%: :ƛO2qg+?go߾y;kfT\xsf]Ff]|w:_Cg=z R9N8eB`؏O%"hsB[q92EPY/X.,+oA}37>q:.nߡX(2ŻbbxZOb!kӁr4R9y `c0V@i)RwmmJ͎ŵڪKRN9O)';߷AR҈"E`f8I%} tA߬B" w%GnCoFC,ɾƈ~X6G/_e7\%Tߗp0tUJ1ۀk!Fp"gFSF^s4ֱwX3JaMGLV[ևRdUedRue!)qbR\yƨ+B@.3`[e)t`E R9,r',(vdk=Z{ߙc_sweFW{kdr-{t9ÕjHAj Bks?DpONfٻ}8>r\}Yur0 Zi^*#! dQIJ=,1m AaTVT'GnCYN0|(0 F)> q:7N88WoVZPoq&΂V:@j8Z.4R]98%:,mPk,_jESz7tHSne;v8ڽ[rJE"J{IK +.Xnj%^3$!x $/? H+39gHr=* t^Y)喔Njk В*)+WAٺ@n وRL5ҵi0eZZ-%ɯ=ra%@@`m d`vRYeɐ({eU%(+%.%Lg 3"U*iav@` {ddKKTw!N>χғ׮%-<ٖ y=upYt̺62/$aDK<&nƂ(3`Fh8Y)=ݞJM\3$*郃š8B)q{;Jr7-VGzy[̩?DŽZ :OU ,/Uï~{( تoo` Lgllr6h.!fo;gf6i5uRs7hۣ0|h=|\*%0wƏrԐ#)]Qj(GhY*jHDI$So" دD´RXaKL( {ʞFu$}AEI(ޕVbڞy`51fK5#@؁L·1Xհ&'Z`6kX-4˅9\kIl׮ $NdڑP3u=]uPԜsfdܯ)ֹ>vI+LwdMn͔{cN%v^9׽IC)@`̔V'G n_hiU٤½G CZ--xYw27wh5XInߩNV߽nQ0Ѥ=VJ:&S:M43[Z)wrּD:{ȝ]~2 %%EgK$߾ O)ӶWfZH)Ӷ%$lF vo' )LGa# 6iV&4qԟ\K6×{ѿ)ǻcҴhs1= Ӣtss7gP,0jC@. X:@ A=B[QIBD h?I΋ά{4:Dn{uvPSh+4 tnRJҭ{kgJ<6V1;ܪ]DüݑEv= -8L]Ny}vS]hߴl@|٣˒ FG ,/=Įsì.hZNgb2G8e/>R /$DAtkXUe v/\/sLn ug@'䉧k׉Lj1M{C,զp SޱPAH+!iԁ#&{;fF$1hewҽQbwLq!YwPI ++)JX"|o}H n2[mzCF^vΦ-wJ X~58MRnT4Ķ=.:&K :&K*qںTQE:goeD7[w#n-0SQ7WǓ{gqs7:(l?kFL[y\+;k?rc!؞J݆ӷ}S(1"jg-4VCjw4ݕ蟨OJ#I<X`T|B3UΨ+]O2Hz5oR3 /gjXNA{K\3Ok~N)⣥̼C]lڏ]t½?ZEl^Nk޳ {zXZ{h N]"P>8IϝbAΏ;ˁnw z=mo)/ri'| E܍OQJ{˓xQwsns7gp\MКާk=(&Bh]S'P`~>ù cXȋȬz.Z(VwP'{tH3tT>]ઇZ Y6[!C1sJ$|.-v/0h1=UAHx# Aք rNn_.gp!Y1fBc9(X xg"3o.W՗HVGpSdn$$L^hMcq( j3}}6)_~N惧D TI:,o89ƟGgo)α@*Hl˛" 4žhU"K)Beu$A 0TU?>(vz ry=w:N?|xaT{q*7Q.׳d6z7޽z ]|x]Rٖ> U߬ zo}c^]ͅ1r_ss3g?g^s3M1c=9s~?E*`x맏l"ncx%nV t_2x]_ /Ռ= eEIQZ>(_jFў"Aڤfbٮ=.t&Q{*[+-@g3JL4/U/w3JTB7Qngkd,T9,j0ơ*0*? ӳ`g-m`hm0R-j0ʬR-` e+ǰZ]em =[[QF՛ەk m,IyqˣG')K갗^?Ԇ?+ðb-z?sǤ|yp|r4B/ˏrJ_̊cʤ$xW7N·wq&dh N⬝LFc< F ?}O!v7x?=2ϛY 4~D~φ&p|_:;yʖ61\jJw-UXm$]>C#S +R Rt45chg1MU F0o:̇I5c aO]b2%}\ʼn]8F={ӻǰTvο5Ǒ+"p'*8_-\*g7CqGSG A""+n0V^W`pɭioq#1OU{gw#egw/?;Ab,Qx(\pAT{S2)׻Wk[W /`XHI ʾ}@\y.ԓ L Pd'vtxջQ\Lf_rs]a/BapcJUNF 䰐3\XcHPéaC5>@OI;I;K-@^XسNo/?AYԞr15e9 OAYo5W+'gaBgGq;]!'5.q!caժJ<d`& K>;&SAhH=w ʭ 'g_fntE -z:xv6Ǣ|o VQQQUi\+eA)e!&,0JcN`A.YJJɜca8 ߠoFBu^Brz3|̀ӯ_jhs4+5M\vEY *ls uf=")ѥxdNcC|DRY@ES1=aԂ@sݢAk6?`x#Y){͝9b*Nj>&xOA 0A!ܣ9)Kc$ "`tr+ #[ ` *f\_s~ϧ&B_rkr2ɪ]/Ă? 
ZK⛪m#|N ;6?osADQ7 ~@g鷷G-Lgz}vGWXJXy=:?SΎ5U -?m7D>#ik1a0%fG/.8G-1,*/h-OMr1Eօ' ?&ZP3|8fxP3:BOc3cLYCA׾a5}LfAa#ZԊ0βi 1/0'Zp!;j[6$va}odlGXN@Q\BY0RgtMɻ/f-0T TcԸяVQs9Jf;Τ-N1BX(ѱK-uQipAjPH}qo"kt.M}g׼9uXBɮ=мE'.\TxG#B w)Bgftiz `җVR+`l'd7;nB*R=5Mb%膡M 0ucbC_.g"FMc֗C cN (Q1ߍHK~gf]/ =^û8[.ӟ=_9DR"L1Z=z\-k}Be\Je4沼Ϊol9 8/X8/qc!Ybgz3bKB[^ ]A.rVZG2mC -Z3Qrռ2y.uF]_W=g7g7iVOcW.ŎK?ݐ3|Y v> 4>4v))n{ 6A5!Zc5Y#sefge[2F$:8MS.*{EeO)].i |\\*+1ToA& *Z`lsK C, H_Qۂɚ%8t͞L VFɟBQd+rsš u`^\4BgꈹK$Yv2@^ue4%;,dLFKЫ2D8̚=20MӉL `A{Nm{&o:`tۀqh0FKc/K50\6+Ŕ\ Jn*WSw.h+ΟL?6~ښјe.oM IXvЋ \vq\qS 4rYE6ihq@F n'=,?9'aP)9\rwPjQ;,ss~nРaslSE_mksO𮼩ǡ`s`U6:y[epT:{kmkf( P0 ,KbQa4.g> ejN N##zUT%Ifi+cP7eWKk{BQfK3V8--4+(9{ {I$.^#C¹OE2SvM+PQCtt+w+k$rꉸ[ l'foFE1993%~R dY)#4])s=_8+6K$Bvo9GButF{ҌL!WX'8 )jSXT&sEak%̚UĆTrց,Z:f5:FM {HKU U𠐼\C +[b :%X#Fd_n&i %Dױ}L|Nm4jm "vBQ^$ 6Cm&J?XK8@[i@1t:ǁ_#xzR>zqswL}7h#hhv{l%oJbR޳wIV(ǐB0Ba&G cg8;O5ctZebiw*r+f{&EnWw,7P6iiN^Ǣ,CNi4vIվsՑXCzhvbJkos 7T/]hu(̿Y vD1|*&Cj6VUn*u*it:d 7ͧ)7a`OѡQR;6)* ҈n6&: {@rz^v[GG=] :J&rBFc0'W+%hZ 1h3%_ϻFGwBeT6E! MPbPNy]P*91G06VAWD!iNԘ]`!I}HoΕAuXx/D9a(0 N⊏Nhs ‹wREuU4he0Z}nib-vËucoNRB@eضB{H9"],s@]e__Åg&Z{.h_]CԴKPWZ0meуM*A&Y qO{ٲ3g fr^ )<'jOk{HkF::R,4 Zy\$)4)pq"|ͅ2%n6Ae! p9)C9o 'Yb].͜b'$`*m;)p "'k<;vMMA A,`lbյ1J +'NMg怣bi3ͮ||ƙZ7GM[|3NҺ]^}#ZwB()iؠ m+W֌6GL< f1,N8R(Ag+m]2R+4*LUczw{>tfcUg75x)࿦G=8^to* EASBg ̴w8sAHI bZ6twÐi8M;*NG6;8WvZj_K@M=җof7hg.ﴄ1lw`Jo+Sp?2,O&-Z8ŏA3E rFF=-8o^5sZXS2pNf3pNfp=ս{v͆|!v͍73'ƴ3dFꭅF^NI&vCd$2p2sCtxslis|]_!BzݣOy!]ۛY O2H^4ueÑ!<>gdit#N9)f/;cS싣&ogXiZg i2efnӭa+S ւDF/.=nRɚ1S9(9@8Lhg0,7JY2zVmu7]%5RIG>VeG?k,pd:w?ʩ۟ 6~e>N* zeN :d3%bo/5=;4Z8XՖS2&{tS0c8Gі=3B7?gu?w N=(5 ֌{'@kzp ]b c{n2kM큑]D,8=DGW P:,mN]ZTnSRlL $ig 6X L,g2GPb]c;}xΪ SE/BpE!UQxSH1(dZ@: s([5$A-MqȀRB/1dIQ E5P$mLZA AI@xKK6$"ΛKrP$0I*-C}z/aK5Pe`J#!XoBFTRē:l"i7!y(+*F Mh@BD<_}QXMB sZgZR^"*>D=RD4L8J֘!d%Y'LD=<_Ev*-suM;ش?ϙY٘swosgFUw/4皐8T_M򦨝af#%{5|a=u)ͼGa.'J\[>kŰ^Ze7}+8aA-[_D+v~Hr?h] dO__c,SCwl}·WlkۻǗE}qQ_z\t_zfVpЮkrC53bYX&lEsh_nmx]^sđBEcZ̯ pz q,w*}uh[й#|`|v9>8p14y'_1ڬby5Nv:ᗿ,ubX纊uX纊uz.wn?{QN+>]5ʽZ/Wq*\ v̕ ۇTicS}/U/ӖRp9[@ \޹KG0̬]کy:`/Rbɏ3SY۬Qm9 ڭǧ zۀ? =8e5};~ĉ*q)^p`Sn=MvOO>ex:xi <`fMv :0[ʩ+nnZ {mZGM&o_x2Z>۱C< y:c uz:8K_G#޷β'콪tp K|cE*ɉ)#ITz"۫ݭn'T:ըr2XSZ9k',{'<'5ωrv -X+&nY̵43ufab&a୧0!mad9pqu:M,? ~> aZI`+t8 J¢;AuW>Ns-ӆ]iCJ}i_J$6U` 1`:ONn ID4FWfj&Ӛ&Sy\>hբWpgBCT"a.jƝ.?x:|\m6ōIgNj;Du_?9@ФtG` <$ZI$~3,g0HC0T Ie>S:Yzm<*냲s>&8 VCRÃ]WlubJnP[?A3E3ˌǽ_/q)/e%ƅ:)"s+$ DI7^ d4ÚW? 
$Ug8G#v%5Su 8{F)yefH} g* !0eEEIq I^ 0cr"jm$[N6b'ÉǦz\9ceCJoP- cW$wZD$z80q'Gg%ypP/8FÑLL(G^B+݈zD<-+y`w^%+q) lEK@@9dj+a޴v#N{& %4BPL@Q`[ n#H=cW21H \nֶ@^ "I(Pkt(F[ s4tP5w<-QTVȉR؈$q$d+ZI0$ BSCSeJQ!e0M:2D&Rhr&+M' rւԶ9T3rSr.6)ƽS.1 lu+3qHD\0QᢄC<BA"I#YK=S x )g0,UZC + K]/k@9=QQVTֶoj}fLY VTlz;Zb1vWwŹ4sK80z7t׊χ!K¾qASxr.tQx0>a8_ԩiC076edRq J s+K)Sâ%GvG'7g-&h#z\ RL'M/)'ylAS['7,͘!oz7MuRczL L' rMnXȟD[j3ZܝW~NDszLtvwR6lѻ3ۋի/brb3H4XX|m_B`;)yN>Lj3_׍]zm#sVNw+`~mȕ [Hxn4ݯRDv+$uiLBΚzq?b 10)|]K,I1VyavhBŌimQ@C28JJQYK b,8<N 'DCI@ R^x2l)\ Y'}$e6 $0{մR~f'K9Ty#OJ3j<)iͷyRA^ k״C$Z"6 \+dj:ʜvpu9[=Z u\n]IfX\@FeApH Ծr <4Ѿ".9dup~{Y$Db@p yN@N9-)_<h`Y~(~M*RN{y3j5υ\bV~>Eٶ}ǔbŞw!N*L;RaRSVBT*Vl4~  th̲.uw7 (Cz8P$ktzṻGШ&TxWHFtvweD ޾Be^Rj!1sk{hD򈐴^,:/s ^+8aeWH^];gբ^Ǭ^ٽt8C9&1yEDDC˒SCKodtLKJ'”XS `8|W!s$~9ƙl鲄'tI)BTNh)%e%YʉkD%#@\_P"m JʐcaSh2bj2#\?ua;%*13naE2y%z!KeIhևdKe"FhNk;H?%T!iCvVS) 8KTU[<7#Cc҆QyB=˲5!̧5aZRXr8V9.m=LoySޢXDmJ^8%}oM8:g\)qMYW&1Of'Td"J"jsaQ$.$d&Η8\os}ΕJhQiB-~r.2M#r%Tzd#*d{C# MoWw]Z;, T@}5Ml9yn50\n^kXUGUVtc",ŻobV ,>Gv/?678w&- 3Jy?;(*ea˻Y^"IէJG¾;{`mh;!{I5^;H[9*( q[%$Pr]slW>oK8i p'|i;- @op Ѽ|i O×@x!"kd|pOfkȄpŃnRŏ[]橋L` dO0c-%|{&|TN+}66CdU@:EgnX>O?mdآP]EKfz<_̾L:SMmja[%ƅ݁J^ߨ{ZHP%YgCm$LyZQY0~"j}c(b9cgN~p,GjmQ@- oхI#CES5r|Vm|+W %8@1r}r.Q~㕫OC1T^IXTQd-pz›jq%NPײQ]"d^~K_ԂԵK0D?ŏw١oI3I_B`By=-}mݍ-܇v<5oo+]čViIb9l8#k{S_P0ƏVݪl{f/n-\3}ƂYԛ݂i +,e,(᧍3 tVc,T 20e̔q$>Jo 冔ooLዅCq: Cǯ<~M!y*;_mhHQc F*TY"@8dQk缔XJB eԂDq^"'J`̂\{t8Òͧ0l[ҐP7~X 7{ -crFXDEeZOd:+ ,Оhzp.MB6`u*؝fo\?;biB41CˆG{'ATj5. OAU*TW=wc"ZТ BYTLd/***L-O#8\3ʡvOK"vIY9<ӹ*3PH,GeiLiՉB8 w:ınb2 19Ą;7V$ Tfdiֈ (LeidT˜H)Y"˔f"גqu>Wo1tյiN1T\?_+)[7D_9`QHΥ-j6f`NWt IJXwiݫZvesvQ"II)hcFWX;wd^dIjgN41y(WĐKWiQ|h(9V=-*ΫdV #dM!Cj @؊R 1ˈ}5lH2fhbY0sY 4*N5ͼ,KI Դr-$yz6v*RҌU&Zܺq}#:!kƬն{B޼nRC+nUҫE+;m#Y#ڛQmx T,ˬ,e$UXR X"/ e&3&2R3FrnrNhF?l>/% i|BxX>mQ;U^S s#-hN|{=œ)_iO?ӘygS5 f! 3!R;A!h>Ο@\hqȊ 6 ~l*৺mPonD.y)RʌeVTI2֙R s]p")Fy57iHE=fC=n7GZs|)[=}F㵜1,WrE!(aы¢Q,i\g)`5t:_:7t7i??U:/|xzyh뿲vr5tߵ4{_Mzt$kJg̞r0xcV5*7黁fȅOQ9%ƯuPb2u|݆gҟv-Q!>EtSgs4Hin֋20p2@U e&S.`XõvetBϾ"SodRJp4TL t`:Lb}-rYV#k86TPnhDzOk^ a` "ᾗAq^5@Dq(בvmEpRAPE)x3엤u)S4-j\ !K4&Yd"(zjF⌢fK!XJ*U(b(եHJ\)A3SH;q3-fxQ38PɃhPV fzh9fl5EdJOѺ4¬>T';}u]~=[l%@oY?o-N@1fU"tniu{g÷iMc?]X_L~a8FX`$6~ [}hD6p a)5: ;H{<[ׄXh 9x^UvK4L&VJ%IjB fM 'Hq;J4R:4/A jő h:zQgF(FGw%nXȁ81i]PbڭP?EFQYZߞw󾌊O+LmY돫n0ovr;rlrWZn%O~S$Ő՟]cýCRRκ]vGD-Y<"Zt 8-G@98vd#@J(:wQͮ?.~4FǴEׯa5;!w Dx {$q. Ĕ@v1fTO>Y! hClO܅)LhSB[ZAV- 6?b7Y:L$qdcʖ,j#;q2U'I/KMCQHY9O) _'8#(%E>@4,n_WhP'<70,i9۵*,ʂ0]C}L7^*sO781=^E^_d*H9E+ubJ\iDzHXQux|YNZ.ikR.>2&& (V$iIU0j5MBqy]fb\Qc.as»-ZcT>ci`=V.o<\l:$wA1?|ї8 H` *w=<XhcD[',>Dij9"߁|,]de{ξ7^Yb1l|NS0f+iʩG ڭeK#4cLziN)Hu`TggT0.^A0k1J]ezwesV7Ou/. 5t,$) ~ԒhvjH輫+=s]Q:pgRA)sh> =u=]ף~!q|([På,~wN{pS܅`߳-ڎHhR#bjwڅ_??IԻވ$,l㞄9j,'Wb"߽:Xmsm3 [9xֻRB)ԑ3J+Z2:k!h. *a9klxf9MLXRSTLK(CHtxo<nW5_ёE='il^9ٯ(8Ky K LUտUZ  GDޮq2@#']*F.\qe Ré݁%Z bQwLhC;ӎ}l %iq&1q\ V ~E=tGÅ1&IJX"f Zc8X4jV٬v^[Uxo'8z} L5Ls$B#H5㡺'6X<xB?yLlk9R/PWb!EB?Snv2wF^]L_Ҟ/sd2Z9\GwV~Q_Kt4ƗSDz9x @ᛅ6aI+".DHw蚷R'!D&\ѴU!SW׼<;@`ض?+CSE4gHS4l}@3V 0 Ƃ$Lg,DWBFXq8g2p90%k~'ZƘ>Q}Z˘kh.֐c4H>Y*{\%<0۷N =\%KƼ L&IqO^[4!EW@]P5ȃ83cl8+fvW;°1'xT>? mm௃9LH*)3!ΐA/;V]ī'ѫf[ߕ @$P]0en+vzxTui gCvEin*%Fj!)g%|?~O/9B sX%{skvwW搗BDŽ`i+mJv7T!@lO&!1:MN$RCRr &橦)#6:~ջ+ knM:C#; Sh$KA}}tpc,2,r 5Ie3}^ }'| ^S!3I͜Eն0| 8kg[)S,T`P9!-}K @\fXsni(gyvP^-TH1=֢ڵx rRES 'K ͙ im}Zr"64G#Ɛޛoa^-a 0f*[nak,[  v8'w~npDJcՕQ \L)kHqa@nו˦"4]G>&>$FB5{ [˽do_ͨS}!M2*߹5,]Kk P\Ԣ:H"> h- Mx 75IS{("|ߪE }|\>U]ݗt!xoDz^v#%W&9im/gLZ-Br,*KRa^Y/DZBK~.l.J}BRߗSNn̤_&Cwy:h: Bcކ 5:j/uCN) X6UG3Re2Ev! 
ݺqWq"/"/滓COE5"aZ)-'`A(#0Ɇ^ $H1d#2 ƻu9Q+s̽I}eΌί|y8׈|N)=J)fJ˃n>#b)a7qM16y,6dn Kmz„:B;qںⱭAaL*)sFן4abTJ؅qZ_]d.57(߻͛EQ<$Zt3%GGyMSRq> /oRNW`/I bK.5NY+Ƅ68&CJAEs1*lE`l)s)r1M Z a(&HpW%8 4ֆQZ ERF В /[Dk2XDM3朷<T6JȊ:㽴 t\"OFXk,'БF\5KskEr)倣&<`:y븋/=;,H-Q & .:KLlWnve JXjŚ;((Ak3]X JZLf_L\oʠM%@Yaq4U߅܆$D d/9|6ڃg(%]XYcc50+ EKA5& 6K.ea"ܳ4 /_8)`>fn"]d&!t;JAPIg6J^;4KB%f1LoH⛷7MW4azoq[mx0VC5N>&5\n\n5ׯ*i0dP8"0@)+:A(0C8[-4k 5XM;, )vB Z|x>yqwMwqWRR+LN3`G[@Z2IX "j|ܴF|ܴG10b=a]tw^y05 egx`Gp+#5TEQ( +@Z`W]`ӧ [H@r\h0)KH= ΢p*d0%$}R\7MJY>͢ PbBB\q7^ɕI`e`h)Ԝ+ q,`q!,(F(G4š@C.Q$䤿W1=E, |J&MK͔$u,NR., (Fcs@8%hV@"d+NR d- 6)AʁtBA<cPVff/%x#/K zQOvDv2TpV+ḚRR/M*b\!i -7Td~>wa©AI+//sOgܴk{Sn%)Xco7:ϑqQ(~ ސgPzx2%۫+pV@"x!tQseƓLeuwS 4ucBL uvv}79Cݍ4ڍ`Z(VqUHY KѦŵWpfm!&k!Zr6l+,6)2|"y52_K7e7hҢ)Zd[IFre$ɃGBh"paQ(?Ky!q`+p0Hbaq ! O4NlAoV=M#Rk^ϟ;{q^gCk$3&iWRM$^ڻq L* fʝfyoD'Q pt8!"Vaia) =&81׊8DԳ }, Any|EnJqۇ~wzmŦ T&tXI߅]v>Gr I'ZՑJ,)Q4"L)\shGł^4, A8|éi´ZY@Lydߗ0ur;,Ŕ[k-7 W})C!W F򂓍IR؂L5{JFMM1NYRS8ic XN&38:82 CΛ\; Qy02AIܯH_R>rF8ڭܜm[봰v ORR9u`:GhJ5`0"5ء:WL\o&| nݹeV~! #A7!q\-4N嬬)F|xq~ *%EM&LE9gz[XKPriW3Lq1"k_H^cKeOJtq3]rM*NRUP-tU|>g"8T|wm|aĺopJg 3sYVMF!ގaU Yb@C$"wLE#̝p~m\&wQ[* !%gd4]d$M̱)fsȾŞ 3@Br jsvTK)cne#5ڝ[gTudHs;)G9מ̋+y%D n^-?{$ͣi!PCX( Eج\X *2*u {ȯl _ +),3ZThkZݧLpM0[у1Ps  OaQĎv//v3@RQ_U;G?t}Ur86dh>hJHqS1_::5|ZA9".T* ·Ts;0*]'>%1Y}Y>dYr|Gw_Xgh6QuMuSgYYYF}ʽ]br¨r(%ĕSQ*ֿOٸŬ"n Q*Ƅp!|ԓL/Blʖo_ǏRrE~Q9JuF8&T W}iam.LJל|Cz S䳯8fF~j,hSk0k8^g) }fݸ򂐄e:I["s o>O>bc餄'Iu/;5Cߏ`CrFpr{Ǒqs3 wzD8Dp3 3PRxJ8^%T8L1Eށ;gqnu98*tqؽz=_mpn͸G.;+o_^]r#ë(&y]z6a@Ͽ^_6?ϿWIƋ-)ۓǵEQEу=~=/?{Wܶ&"L>{lv2Mfl俿X(!q}|pDsoaF"ŝӺ<[.G}0Akge&-iiO/>qj!m.䰘t'.XzMϢ~4>;H'=' p'gu0S> r߲6i828Kƴhоw4e<9rALV6}zmeIdPx,%1m^d/HKsrG%)bK>Y$:<);\ǎt0 q/rdY|Rb 9z&$3{8xV2ˏdFWg?rLgOZ=T *di|7~IZE9=}`, u}rq5N_^e0sC69OWLtsHkMnZ^_K ~-ؾ~4n䢊 -q, 멫\&d|4TCg<}&6R(7C>7}}%O'J5龞|7q0~z>9_oySo t/wW LXG$fo^ӫpH4q rf.z PMPf^ %͓MU^ c/)#Nhtqޏ՘ ^\}K?'/9N_*j`Ф8K;:?}1m|Orȩ rENV$h97YI' 2_ְ24bVf%*%Q_50 tK@e]KmrJu&&c recNc5˲ј+U{ڴ@^&CN@^w+]Xɇ*#1*īWɯs^Uq|HJz&kghu$>K!m*u./F)nbs9hcpvQI_r#nѽYCJML.>hy;yӳr1D!-ϨdB.FC`eЪ Wfkҋeҏj#vuѽۋ|@1Cָ)UAĪE`3((lp5YZe A Tᆄ~m3 J: uy 1-+12JδD%*J ^&h^gtx:0 8?ExAXb.n+_UnW'w[S7?),Z^`RcOn_B/_Əڳ3+@%Rځ=KýdJrmjc/_< Ԭq,y@wېڴ\I͒/dSQtYt)'o{60oËQaH?Waޅ*02ZRqeufG(0s9붕0WUln?,˫hi "剹 ƟF(9LtpG*Y6yѲ[AaFH( `ewkllsd\2|i2Kxf)6Mʎeu],#V2ڢ؂$UY7PWϩ@ӛ8Psģΰ5VQ<9#FM;".z'CΈ3 aڎmaQDA! 
8b =Zؑ.u@i<+ 4".Q*kOT_ϯE䢋4l ryE?k]IA%iZ"c>5-/K B("0F`Ps#0MTfCqȋ$5/Qg6/Cġ:#-$Ck{'6ul`I:!JU8ux /$;_^W 'n/%KWP;Ki-V/b`Ԣf%$ TyP4kZ40xbX0n ,j|[1@k_,lFo,L*6Ƅ-[j\ↄ r`8JѴ!aqC,n) Xa &h8,Z-wM'q oq|pb=,pMLP 00#<@ Y԰B}6Ds V3c1s!rPmJm1dS"f`0Vl9OɘG;1P-6lⲊ+1jGl-*]\xfz*QZDw51Ʌ.0%^ڪJi{$rӎr O& !]5#ivzQG_Of:/fz,f,:dnka&&Uݤm02l%;ɴɠBbU+-uz9-Y8p#TA}c+\)d5h-Q9皠(aiMל8(|0̗cU1Zh0MW}KdžW3,f٠jFw+)LYu$e-$>|AҒb%cN߲BK0 rVo]I\>ȳm L@<LJ(E?ߏyp3,I$w¦s5N\ ⾙LLo{J5n6IPϳkn{m~:[D{`9=CZ5, YO&n:o&nzɋL&B `ddl ;Dk 0WR`0^s6) βh:K^ZlWl&DXX"&bc:MWk$Ҭ {\-ӞD)`}ozo4ɂMa!Fu¡B1]J-xBP6ShW2m16ĭro2ȬNA.b (h8A3 mY(dJ`mmʹ^&Cno0+9Egn,D aT$%CF|]PTj{ac\pݣ-lUVV9a*2 maCik;+FtUT'lUT+UQuKY]dfzMSN{`Z8紼][ wI)zRT.I f꽒sjpS*gnJz>Ʊ2c&_ԳR{J&kI߃l 7nT;be4t\}&IV,./-a9/|(c"TY a?L-`|ڞy>Uˈ.Uk׉݈z0p0Cӳ/n(LzHHLB& ;Z,wUlV` #Y pzym} 4js-ɰ+wѠ:q|t4VEUG_ yx07fu7O._'~Nhv~vv|E[SoAk<^;9'o8|Ǐe\'ɯ(qГ糨΢D_$ '='N;_K"A9[f}=G820u;L~Wս+S/Ñs+EۈSuxq=ٗD [DzpO% #QȽ\OJ7N=Ig^rKɳo$ftE#fg΢0𞼞~4_J#$Cg<}&Jˆ1b؋5HQQh,]sn<8#M}~v_篞}./Zwo\Rn_apz DyUq#dBupp9HB& [xvS2T?>WN>)B.zG9dG>#$_^cG<]Fqh_9)%}Ǽs *)rlba iQ*cTfgq'ߏm$Or})vNV$䉐dλɆCo WT#òaJ b\UB0sLZ5w`eG[k/$ߚX]`œYbr0k)As=/%}2-9c]H?5<Ր6Wi3K,eS({J)R>+<70 x%KN2O!Yp'Yg̴۟A8؊&0KءTJZv{{g%mYz#z(@ znD7Z!TƦh㶠RJ8PJA(A@(P?_%&tcpQJMReYu6qg(VJOk=_i@{0W Y yug3l|hmN.{E-j0ͬE/2bBBhLB@H-@)8!4] Gl":̂6Y@B`1Ɠ,; 3tˁ٘9&x7u]BܙX \ȧkZ`˻#}_@81rL+Q[ޕ65r$ü"P|`;u8 ΑB%1Yݒh-hGǠʣ:RH4QJ!.Z$^2ap.5Q)4 B%]@(̳JtBz 99afb1R%r*<*Cu+Ce J:`o/9t/)2I)cV|zg믿xtե0%v&=3G'Ǘ; MیS]d_1]yߙGp1KSk},4sbJ<1#V̖qmx`@]˦^ 0£3$I"-^rͨ~<v-lgg!MqhsUL=b[49FQacSy"5Oֽnovmz5{h3F߭ 3po&|$|BN`խ+K"AH$hQ(_jYѳ*1ZОKwriA>u HVMKN&*ȧsVOdEN}_=~ }?Ϻ []:Dך洱ڃQewuNDNF=`*}eAjpkN*Ņq!$|@Z-sޱ R].Pp6 I?iy|Ϙ^ro/MCF_Mڷbv3%rn `kKZ23ڲ1 Lk[;v|K) E?0%^yD*FTͷKAVM|vkIC,WX TA&UhHnzn~n;*Uow?~eGNۉJR+|:_UtrWr=2-T<Raq)hcJB.F㔐\Z=2 "Rς eIr P}SN_u1e+ab^vNۅ VLJ~lJx ;' BTa$S*1 -d+B$bAq#ӻ.CqLLdCOd5#F(N$f"'UutT4a7cJe=^C|}y4?? ]mfSbums5 P{f=i~ ouChtq3m?Z:=99nVqy8iw6=vx_'/ ӂ0iUL66CS/GSư/{'4WHɸ?Q<ؿQg4X|=)g?NNbp_\;? V'{p[ݵvuwR|n/] 5 B@5r[7Nvk ^}o}a5չޗq~9|G9xT+rstnw<zf~7"2г`v4v{u_6dM!bSkA!ߎ5!ߺ -p,pfo I~'#\꼼?^c!SۈdĠu>ވM6DŃ۔x=pUw6դP{_nf4i{/./އ <@4~_?%x &GhŢ;M Q ۈq=P,L_ț:EN0& `U_UT;rQH#Ü*ҫHɯ|d1- AB_V2Z/@%wV9EQg\1/ 蒬3 ` ȱ/O5aD%j CBc]D:G0i#rAbyYjEiCBT;"N@A u84D@&dx\2Q XS]gKRMEQa\:ֲ 3ZeBy{J %Z*PS#%h H+O9V@E&ZR>"OR(@ zǕt1fi>[ɚf+Th| yS?+`u#tX-7X菵w_,s3tZOuX=N5 T|KN]Q Cp(-x.5qQW A\!S ֩82OKhpk`iJH"H *!dĉlz{! "(⯵/jRYɛA!Vp⦟b2G f 8|//0bmiQvvr\.Fi0d&ibb'E[>{{Nc[4J2RWFzH1y(|\9Ji4v`s0pB G!AH=X SI*|]8B+pFOLp.0ZfKv)ެ GҊy "JP `Q ,}nLA R1Uw10FqS)m0Yj.w #._g=$HIӵa@3kȵ^U0Q{AJbZwMmj9 SX"-3~Eg4*AAI_/)8|T.X.>.@ 4׽Z<1A{vᖄ5s9 FƬ{ HS@VЍ\*[CriʒweiT@)7vEK\<$-@OʓvUD`( %.&qA[ǖfݎ"<<-ӷe ݀cEV`8֬|{>l  3Q)|ցkiO\d%u)Ou)S;8)zdvOv@y :_[([I {no>_rTGkan' φHFKž+yZ)N6iW3U4YAڻP4|NrV:r_DI[!_o  }%޺vazN ^z̽CObi\.֢;OB$eRPMI&bD?NÞ3)ē.-@߂K4\."q -lqM/P(]c?l#77db~rAv n~MǹiX懟^oUnv3_o.WWz*G݃Mzsu?~E=;PDDαZ=Zu5|~)E^kQ rFӛwJqb$=?-_, pFWo8]?ʂ"wӔ o dgNk5 <@Ŏo0_MZRxt}y.|5'nK\>f,Ϯr.'uiZo O۽40yA1BuO)jlv34 x{/^v C)V~&H!Ə vȯε q,B/k洉TF:!V + [軽%c(l/ACև[AEaem6 (@gߧem6Pm>"?ܡ_U4֥JG%ma9}7nnx'0mv82N) (୺M}HBcW;ׯ^W7RgJ"v5H=P(K|]dE&t@wc?(DΑy"//vF@嵻.x啀=o8ɻ϶SZAx)Y&!=&I1T:[ miAK[- nm j^ ?\ķ#B z֤ӭs`v 7GJtOy_6Q#s$P7Qbee8HUjIBz 2LPHb2dLd}H%aDdN&8nJY5ޫ4!v@VZcU)Sy}E*չDK:* QPA%Ԣ*L!U0S)j2%qd;(:ppT*VP3Q_y⪏zݯp3TP:)O?{ px][I)%:zlgBY7T+j֒3u2PR9&z?!Ӄ.%AOc QOZRE$<Qz zv ˜!Q' Z)h{CDZQ:8rĦa q!B] C<, &?zX8g_PR=Jq30mן쫣-sséԤ0g:tT xk X\{-n@UChئ'>\nT:7`Ԙ*$ I[5SR54R3~bC⣚DgLOoLy.KtV'Ez8>S|C`t _22h RpMؠI?x| C ZOqPf"()_88qcj -.O%cRTlH]J_n7UB)c\{0s5+Aq4|q!Hj,Av9i$:u3c? 
K_*b<5Uc Q?H_YHoM5$ێk͡T]d+1)'K/5 ˚EjI۴p#bԟn[b_Ma@7swbi\ +=3V$_eߢ?~y鯽Gy:}cq_os /-t:ug¬<} zIzM~oZ_O+F}cwװeW|]1kߑ½oF1C}s˒E2usoi~kAR^|<O/nҥ~\kA{ x~ [^3{eij$m N6ESP6N ~UT̻?V#u}O}޸A' ,.g0g|W'WpuUrO%ݛnni[/=:k.ZF-@O@{!2t~ -|ِܟ똻 f^Hzw;\:}m+'.ns}Cc[Monrsv7gK54[3(< .l(k"57E‹3BD)"*b$Q,RMEjJ1J",pL-B<ũ,֩B)5*8F _Do oW"@FYt7WzF1!OƬ13D)-<RC "B)ń% I[s #D5UYCuiL6!d뾢9.q KZsy; 4SD"z4 YSw'6ETYF p%۠O Ϫf4:7&ᥰMt˄؂>\$#2l8`~&e00ǹ4"en,7Ҙp+ cR,NahURImָ\WaxTdOHt_ĭumyIIalB;_>@P#P~Lgᮖ.2AF+OPKn9#'{G$h?$n;bŸ; n7;5u2Y}KSOZTx\2 *4?kWWsL~F0LM;&fё!R3*^9  yTvL %J_ ޮ GNSBV/eQA^^=)Ṑ})CT2PR,?*D$^qlUq-װB+ 0fT@nn[p1pV * @hUAV_UC⚻>B‡́o`^.pɴZ*emO5R)  tKXPe< U7o:*J7ej&< @yikm` G)X`^S"a\))R)d,42v8Db38qNT, #ؐ "T)cVx26d-ɇjaĬfL#cƇfL,Д"'01!؟008wlI wl5y|5Αr3Z7('3&(RnΕV=p ӎGdd8oaWJ1ޯZu}5lTM&T9E0?.sgo_8Y־t Ȁz 0y?N<6qـp.k.O,8&5(*7qlH\RO# rn聰w@ $Sb3{k01lC_l̨@5UDu1X,8l])lmL 2X^>c˹|^0oj $&ZBϰy!bG{[/C_1]B( Ȣ46 6CEd}ظ0$NTB|q 19nQ՚b{W2U>o>%ʳX~Ԙ!yTAhl%8B!"9ٟz׏ȴ_~N#DmiGA53_׼|D"X.ޛҸoGˑn~VUKNya|Bqgׇ3'cv<,lAB唫{PIv8.ػcҏ+Wĥ*9cX86aqp*Mb$:?&Қq"+nFd7m|AL\%|F:e$beUʗ[Xx6sյhG+nJkoAx44|_• /icYڿeA^?NL^ޛkpޛa`PX00(м7u彩"ۊZW/sѽ[rzU@IY*3(֒c{6#BibU1Oژ'O ٪&Y9<9 iF?Y,A4 ڥaù% 3pbI-ErTbZY~k[tLMİ10tN1Hn9LuH9X'BJI15p") O LAhJUe7ӊ)usD E()ՒT nNDZA4I5±cRxI65n;$f!=)c"S գzEuHNeJ7֠ʽbՠ_q9A)?֒0 8'd.<H*7W)&K|\RLeRQSjPTvx S @@p) :uf#*?1^rthd}`;J5Ipl@>K4d ɝ u^E,-PQka.#*Rي:%9|)l"sDO"E`ar3'U"tA|HvcjM͋A0-i1tFgr7nD^fuCoo'N2{ՏԅU 5RŒs؉DO& gùU~O@]QwZta4M}hBVhČD$M8Ɠ9XH!HMN \8E*lo =wJ^ƚQ?*a@njDf8esCy8=|zbMJ7iN+Ce*n JKgj_]5A-Y-W[ʵu TFG3%sc`2jO[tX ȝ1z*Ģqb&f&+#iQB^vq"}}Oqoߣd1T}saKJ|ԆV{q9XF3X hC*2pv"$b;. ҈tYBK%[]fM1>`Ř&r',­ܜ+' 9M8b"ɍ}r,V(%`B^Ffxާ(kd G4H㾹r-:DZRUAHK8p0#[I&fϧZ64ۦOXcG&F$$[!v/$~}16W71VZ5Ua(V(q8 FRq`Dsi$.VNaFEOHC5Kk@hVN=q,8bOPyq4 5?U͖%m|FɺU֌nA@ +Qm y0iپ"}F.&G]Ku3:q[sX?Ep_78•p1f}z 7AhIáxV QȜ9dVh_J»U!؛8ǫIk?>3x@<ݟ Fcƻ/U[bx:i'mH8QK*uVj!J(+uN/UgeX?N~4/ϋJ/?Nw ǹoSuډb6);XKGrK//ܿ,~YjS :Uq#mjcاFj=V֙4x5.0,[VUA锅&Uє,P# d+Ueqf/"CF_q 8|킳|ԄK6jtd0/N Qhќhݸ9 ͋jfዳp\RFT Be3+wU X!;aP+egRS_fܘM.-;SU)'0/ʎ.(;ݝL,t(PcxDHsm}kB‚Y &2,tdC V7 F<$0CdVX L0+ Q ˠkofy$3,2`I&䒄>A B#"q5d; R2+ @Dn& x( V@ s*C0) 0|^!tȡk!C !cXGC671g`T!B@4+ Qx |5 YBP=2 ^JJk>#\n.R#jG!)O\0q:q[a-58HH.3JZخs]_{ c.Î&׳% w><(Al͵?tFAG^?ɵOZ&};Sd"è3cP@!.BUhHn@(c/n:3"eH=E~]O  U8"3Qo9,O޵VlAĘ33}.yic[7H$>LN/Ƨ֛*ir|j[x>Mn_kݯd螴QkdBla?wۗiۭw _6=dk钻wp ]s2޸s͖+D]]w:Zozoa<й?nanh~Qދ* ڽNwR Gqҹ"st?s$y7HlLKF~7 dM)Y'֏\T߇0_:P%C3k2ぽ9vY-pgJ'klnxu#bpq {:BBUu nys7!]ﯩ;*}vaM|5#Ճ;p=U[Ex ʨ^H_hS"TŀpC0cr]b&qJlh8:gٍQʀ&i[8m]KA| cɈx?Y{׉l-:ƿ~3=Ξ_~~k~% 'Νa^ yw:]h:??N|] 0vn&@:gGM0$ή~n5%ۛћnu$YWLuAZ(L{uzN&yJP.q}3oqugЎ//o _( +[ [+\:4;s:JPgRCLl\ڐ9:[o; CVX=cOeq t6)];Kt$ ~433˻esOR;">["z PHtt f*!S1L9 |?`p~zH`XW\x<67]eTMnGi'pRGCoJ`t@*7@kw+RK=2k^It>qQy5覮,kf$"CRq1ꖀ!əIoYK7g."ۜYCo\_1$/5WUX|m!>K`ecXҕ#P?4߁e r`'jO)y^O@vbX5__OP "۫ :QO`#bC%ap^Wܽ'XgM:U9{' o[zWQ+*9Z.%Y{õ<ٳBax欸w`+-ɹ_)Ҹ{0?4-Z_CP1 :pi~U %0f*e&}2Ao}$^|<$a܊UNerݳXq螕0U6L[dp]jm#}h_գV"-UmHOWMҼmƚ3Q֋-ōBVCim! ~x ǰ'CoaX6BScӃK'ngRw`g_ lhc {.Vo"8: FVێdqyJ-+q.#!Ff`KU{\`KoxSTQ9oAd+8yqM>>)t8tXɬ%E7η%4l"vAYg$cOknl-"?kܾҏMh5|4xíy̑/Ά=ZШ2F6l@[d(`6-B+c_0=q0 +ŞrMϤ|p)\"]3mWRӓz]S暱,'Zu<%(;GY>GA82{{ӿ0t?CybwgY9uԜ*@ND{vqq|‚HU%"ݞ;yB9[“K6oMc5/Z*\ʎePpP1 XzEMy2z'&X +/U%\iV9[JXmPY$3-w J@U$,H=a˭H(RҥkhG>kL1C\VꚊˑ&,>R +[Lt9G/s=u+V6hz /?Ke6f1Hgw>;e.͑7\ M^ԕ-t+y! 
oV>:kZ^' VKG 8 D$<7XG O']h+Lu!,1[6%[jAIJrF pbǣT>j', Id7uCcAʗ uEc)f]>(@]nM%PeDzf%g Ќ^f,VyXɚ%!|@ m^Aʌ\MOyTm?_085X &kkTpn lK"WuJyC K)yn25j布IRxcJqPUgO爨kw?kRo)G@Z{K%_F@fͧ rUQGZB@{x  A}ܠ*bkMZe\j3Z+}{}ko5X}ue^k+r[*CC-B-GHR|X MNU vUQpB>ۼ Ͷql-#+Jz,$(+&Jq ]Ua(ą~F ./wj{!Om$̤ I$ޢ]%HkmJwJ/0ˡ.ת@bqW%F#6s{#`}ȒImrJe w&:S gzbr.3zVgkt0gݎl`J%5/hF9>ȁىsm@"J6R0))0FFTڀi eT 9koTޣZh@,QgVRmKM5A'L61=Dj4:'duTjl{`չ !؁Z1^9?@|'p lq,A!8`֙H؈kUl :RS਒`V"]#ZГQm|xg~<}wIv|g:h 3L Sɻxw6∦d!JI5)ҿ ;]?2VEz?#p m)' 0{qs3*K%0>+^eR#Ow:"ϸ8!a=ɦ8qBkF(|h ~2`3.C&g%zir=,bVW8~@ִ6+m*ۭn!)ӽ-i2]ffq9oFXOUݏVh[ O!,FWDT+S;K'!@TFCg߿0cr/.Vo"8hwGWެTFVێdjpq}Yl݇4mZ IvJf-8syb=3zļgz8_!ŶT=@0`̼8*qMI9q{IQ͛nbaD5N}Թ8>grp߳qg%J!{ >H*R:vSli,D:jz7%}2v'>e¡8;S`V*5['=8~C :E)eб3n0SgYJJ$܁\пΡT˨*_?)-L&b/nka|"0zq/0wRUJ&̭fi5&b o hj)1LS8gΘVdZ#aGLXbv+ǹ},F 1X>%hۢs 7-}[\;^ZKX[H5R%$bL-E8R/6H @9"JB&<2 ga *G-Ig{\e5JTN>ˡ>dk+9um’58L@DKp*T࠽# e$^6"͖ɧܳ ;gA# :M^t;gHy2-"ޅ,Nqǽ rAQ"ai3Уs)[?YYRչ3fjNEp:r=Y4J.5<0RGq a%byU&b|֒eBK2op[2qaÌ7q\>c6>*ݵbOa8]@KABHQ8KTrTE%9%^bioEPk˴IU`Rh#9ETfbǐ6hO4ߑA)}dUS*]yq;W3J_V>4K>9J9 KFq&*RX,^jG0+歇,mX_gh:PNxWw<ɿH"5;G1%D-u8r޹M@;KPuC(W c@pjPз uxӇHp|? &JGMs/Gi+qRwo.dl/WC_&Vj'Oeε:XLc:e`cFT8\EvuUߤu pNӹJ3`\ \,0ډ49lZkX0{U@/B)1nF%*qp.1E^z/ BubD63U%d)éF(%la).Q+oIP;J @fHSA^D jhc8R@6jj@cS+ X8J` 3!YjC=>lS}rU]:4)#r2J UCU+ٽ[ 'wMI*ȇaR_?Y1vGΗc,)Oz^`ا/$VnqK+O[1O s+{sNzTB< jz:,*\Z-o|)tŠt;2|JLs zԽ mgSE:!"IaaQ,NA4eR%BH$RJ c .703 ;IIK\/7TR&3ԁ~ G8Yуy+6T`@q!'C2fH4`b(gJ2³)EtXI%!N$NE^(?X4 {^*@q駀-G_ ±\zi0zyá\s)>.e(8XߞN5rhNRÓmv0_u-lo; 9-պMlcl+[WMtd ]fjB&`[;w7L7_먶[˚˔!*Y5c+b-1 j:=6rIeJ-M* =)o0 ˛`9m؟'P)XqDP~%hZ]D B?!j}S?4W.qoRDwާR;B ,/Kq"Yjl귦j*߭X=Hu%Å+Lzm&3Wu8ԟk/h>GeX&ܰʶGPݫ| ЅaḪ .4Wԟ#6\Ъ& ꂦr|.OPӾJj= TԮ *5&UX ECO(֞#*NAV OLIa)u՟#j~MR(!(X- #a ̬^iHR=wQmC>i h| 0 ޹PEbIB6Qf'CפZ_ $ս\!\*b5txxP ӛcvY"*}-ޥZ[+QM<=Z`IMozQbeHÖ{.8,TSeߍ YUC`C;bJfOb[dU]x"sT3et0y熪(g887QggI,iP|Hb+YDVcξVyA9Ë'VDX${"v1HZd˹drEvF_df*[JqhZ"8 nCBe} amJ/cZ^2)XR_:T&*뢬@x1,~_Ql˫vf7-oWBbp_ 3Q{_-mֲ; +Uȯ 6߰K(mq?B)ydKJT(lwk`@=܏DQ09Y;\~1^L{.R\3yt˓뼡+$'ͲY KIX/TH=TVw.W.bi-<5+, 9c,%ӖCft`,T(W[CJVӕ׿Nglz)Jm >fpKY 2{x:ޔ[OjBbq|7nuHWE7M?G+GӧOM/0xc}Y 74҈ "f(giptXt))U`Nc}zx2crb:"XP ǂv*sa= N]_bVFC%*'l 2%b-,c*D/ k,x6HS&y-Jh'7_N'#\ݜfPGd_Za?W&[EA~@>x]|,F'hN&@Q{g}{sE;_UZZfѦȂY Eaa%ܮͲfby^5`Z\ZxC:p.(8 od!^?'itȏcX>ukO`2>D?JS!:GWLY!TO鯊zm;UiY+gtsJÝn빹XC{s'K ?7bÿ)wx]|Zٱ3w%B{mW\|omk%5%HpR);; (WD!-+M`'hpscW1oVbGj#:4KIm|-p%etneDόgƅ 3~zuL GPڂ\:e*jݩn6.o//a;10GӍ{k6 UVFeGc F>pk8cS9ß (o‘s}k2l /ٻ&mdWTrֻ1W!qRsr슓K@QFŎPH HZ+Jl"h4F7FK Ā $l,UڙTxcXcie!,E2S'239bb_"!xPFpb(a A&8SJ+D2e)L50Jd(8SIݫM&u&2uR">xG!`Bpγ%#L0̚;ߕl? 9Ϙ{0 3JhșM}R#gV41In3c˹sPRCtA;C`اWapɆ.1;=?wcBc@65G&m AXAh#H:1eaELeLT9wF".$1n5!㬩뀲L[gX B8?VLj[1&D?ɭM@۰A SC"G{kK_ Q<2w,GG,2/b/?Lb{_EKLq>W2;p?&Ϳ_LV^r! _ #.)#R ~K^|~8#VT TPdc W>U2Q}z̭0i>^Q]_7Q!5`ni{$ooy Xr]N*xCyGac9W({}~A kXy{=M Bu 2 [uyf4 >o |fd~`a햢ӳ^4Fr5'uX5|J)8T>,vlӳK}⁋%a8Q%h0o)A毃~hឌ`T8FyیFJ*wÝa(y%F%evNcR`{<aq|8ND=#ouםj $I죵[Hyk{ ǫHtr؍]^D&d<OL06wPۏj8ȯǀCV 7~ZmǻMpPp8fJ0ԨtRBCXBa`r sbBYAh @b2j BׂIL;45cm*[7!5Xr渎[LȘpiBIRDbaaɠLJ_X$kQS 3$b *,U3F,D(΀2p$K9?@JaXaJ2eDRf!@HhES hfS4b鮅"7,Ou1H\CLiM1I( QJ5ǔ:fS3g*uKOKMH0p9BT^?}YpIQ>0 UI ;O1T4Wп:HBĭ` ꘫ ݮEh!s.w+*u3xi: ݮΰ@حYH5ئpӀW/5oK_^M2ELNsx[0;O-O&?q0 %BJ@! DJ턬4&-u(h3~lwufyA1@1fH:mBf6!,`x$ma[B{)E}@P}.Wæ.{5l4?VGrBhא!] J؞%>|cgПO7Y?(p jsi'Ly';l0~KC2h=Q\ESQd#رuJH)\A88"YFMK/ķa+EQXuiهR q2S|QL|/8ȓ"6]t 7{r^/xݕ3)/]hS CI\/BtEV6z{_Ϝ0 _j"gx)_ai0I^j*p_mNv٥ߒ>K |nrYĹ! 
U4F{k`b1Q6X3S@;n nMh7Q:UeݢkHMd;[~q#ninGZ&:Yv ,Ghit}}NF'͗Q&xY}Bo Y20`xB`HMҍ*wL:t.RȕJ+A rc_ދ'$!ΛLg~${/ꎐIz+Ώt51akV莨Qt(}CHSr;FFV|{/[o m #d`< dg2@HțK if,Z˺N}6FCqC}P?u*85\w"e>-<Ml4v$\$/4{="7f>̔/\@JR S&˩B"ȅLZRgRY:ϧ<7>yKe 7U#Z{; ׃Z?.C ipSؘ̆˷B*H:̊@-pMħ2*3lI-,} Ñ !0HnR3@ 6*fb'`S5:30*cAB,R8<ec $)I$#I~C0#K|tSg¼7Odٻ\fXpix{ ׫_{Tw훈CJdÝ=s!o; [`xIѪn/yaqruew_޽ˣCea>ֽMW71\@/\@ R(cX”-kw 7 #ˏVyYBpSZHS$ƬA!)MmBeqtM{zS}p}18-SI hȻ8bDTj-̄oj6x~5K7>l05e$vuFz^Ȥj LoAG2@*K~? i4`4&mpTAEWy@DU=.yyq>9q,AƱ[D{:\+*Lswïzaz• `Kڣ#={%{ (+{b([e!< rMo{UӻɗqyZC*pOV/  K\~u;qE6AGp1hLz^&8xJp 58;^IAJ; T 3'$%2؃Z2Ûu6S L4 ӔU g:6KbE@1i+ )9ޮ@U7g3`G[K 3IJlNIla NIhFBU8ͩU;:7Fj /pJc19DRJ3 fX10`ȈP1Ań!4FISGǰ,V7}MXBZ`F.5W `u&-kK{2}!|@ٍ9BSVda7Ac>r?٧Mwdi͆E+mY_E3r߇l$7XLFM&Ȓ"N#}IIH$dW:ovMCazٜ%,l?OؾhfSЋ6'>ؔ Qt׈aN#BB0_8UP&T_h/?d`;D| _`__5d$oH\h89M$X %l ;1?ˊ;t5QdOQLj\Gk1FUU#z.ʀ(bu+؏ͱylVrZq, zB,SR12ӏ.άĩvP&"_IJ>{>RΞgoyI5|sΑː.'tҩӥSˬ:e48𹰆(+EaLBjcHq g݁ wkU+#E.6Tˢo/BU~J˒ǐ)P%1Ӑ]n'm3l`={ٔ#$+zWK "s4 $2‘A4b0FQm o@/)$q\7ò+j+0ZZ:cWEg|-D$HBP[=&0첫?XL2uP@6"3@2qQnEf/ nqk?7cWq|w^dž7'p\z=yFd<{:${GZgvg'UYw' јo _g'+tM>>31Zo{=6 dlXpݢ)'O:+0.yBLh"#2oU @+vosnwl-gl<(j+Y"2<@ \C " ϴ{/9 !]ι3:[Hэ0_CP'Env|.4`N\(4!!ձiO?$lI2|hON:\ ۇ^? ͉q"xՉ-`~r]@ā(Gזz̊SZsK#Bw53s9~8IkTmReuµ|0oIKqJ-+J9U!c@9x7A$9`FKB}c% |N)4'Z.wG)xi~W'W-04Ls9B| ӝs:5Ipw7|۰qxsS0+x#;ߓMɤ@0؉G00^}!dO#LƔ Q0C F@IKa`:n7ކ_ ;" !a uv|ډ ƯCcm%p8E E8j/-"^{[c`1g'Lu.JmMf"+]<_u<7$n:}Ͼ ozwni{toM\˞q>q %6]8#,2epOJٔ2)|dnB*W(=̩) ?ţ$tԍgoi)؃+TVwA.4O%#$=Q?s@䯻s=lgܹɳ{/O.Fbw,XGb *@i-[0"oKwQҋoLa^e6vF9;|;}pq0'>Dm W^ kR"2c^u0 a=Ŀ0t4-VN~-w;mIN;dC'y wf`N/owO, 7zUkH۹chSzӻzI$~j8GOa(ƣ*I:mYnHj (H:'xq0uƸ> ͋v'_pݻ/98?>k@p+eޥμk<;REI;'oȸI/ߗtdmҵqxr'-Y74c $@?ީ]߇ؗ"!q'ipdM>c˕ UNam(4EZ6EG< Lce"$W>0? l!'_%4,w*1ՇLf_ 'W>|kRaΙ纥${d"'٢Ы6W+2?#J[y߿1ʀ/қs}Wޯ{{+ od!r`&zq0 ֻ߳oSX=ʤ/OcRmgʜ7S9kyP'J*VY:Žh R.gX\s;^j?f'$Xvqódu~KQt8KjqGJ;?p=תD}Xp˜D&>n`,S̗8B$(U"A=LFJH/.dh]D<-5.;llD.~<#!q˝܆*V"FWQ|>}/IOehv{mW D/]~~TLHD*YQhLt Xt{)gU0 DӔ,·X3A99]V7>k~8 &m8thA zIJCo91r8DK3]{o۸*s=q@[iwqz6AӞnaP$ֱ}Hߡd;rbǒ%9v4Ml"gF$jf2a3"#. -(E{R3 րK LLܝ?A?Q#'xZڊxc@dN /K:N8=dVسȻR4б;kf#lDKSAιPRmP-2yFۍhXۿ؇ej7@oNC Tk5M6kikwrҍo-锢luw'܂ٳ.HSlv \pI7buBNX :مX˺++O0@Ҳ( aD38$"] "\C%uCƷiTXxx̳yv2NX. E>E$$J~G 0/secLcgs qنJ qƷŹ̜1;lNֆGSp>5*&BmP~-)U.C_rGنJGƷVr M~,*@䈉،rRqa܃[YB$r;$e~'Xf Sik1 X!mN? 
+$:ta yÜj^cpOa柋  szngd ="f#{= B38NcQ %fzGv#e|+C"O䚺sOrN~b L`qNω+5t)@k#[UeN97FpZpLQ-Fbv†֙[:hGtMӗG!(q|^)MBX3rf~yT:0 |ހ7hnMp+[ퟠQMpFɌ ^&ŭ䧁P;>3: fH}P>D=?Pf}>]Uu*8B\ %xZuyr8JMʌ:0Yn fE`2v 8cNcgƇJs%Q1qE*L|`A=t:h]TD+p2WB{ż* CRpSc:"[ BG;XUF[RWlHL"2”Z,xP^VZ} 5{P0*hh-1ړJ)U֒\҈%EfAvQ5/ 3t J]űSR?or0qA!cCTW#Jug=,;!,!;xEˎĢeGcZ#Qd6,_*1q5P8+Ū0C<씆P31N@8eDz\05̰ZSq7)!O#GZT#;8,L1-#L j\ea2f:`J>FW3%G|fqY#"aRC .KNi@Q a.#[C>z7\<{k9Ѥ5w8z]Rz2#-Ooz%Uav望W$C;L$ ܴI!=DD \+*qKQ%[z˴ּŖeQU&x\寫'ǚ6="sUY(Io h\@YUr4[aɣTbדеH=PNHv9DhNk?T=:kQX>Dp I<' X{3|/ڒyY 2_pYx --C}#= %<;3'ZT Eo\O}17ܗYubMi-D́sUI􋧔zub7>Ń &9`j/z(@`Ldgqe*5li FQ7%+xsYe6,q{B%G< ύ%\?z3z3_=Ur>w9mrmʦG|s Or8n?$aǍOoyH+zu1),Bu,B;gj2J9ɦ^Za L9~v-778# ]/.Q6Csm}̭<)ˬR8g1BgV?O.ܱ5psL2;vČ;ն&3ޕ1/_Z$SG$WI\r)uֳ^{%LĺlEnicy:2+^rC΅zGɦm?tɼOXhΔ3Gf{(%˚S+N.S 692X-q8->IYCߟH9~x:eޕ'0W6 ^1)ݻ2c,z#?9gtMph۾s*ש>=:D"Ϊ\I!*ic@P-gRG$H 5yq)PSVw {yMEj2f"DYR(r!BT وe:u gξW 8ٛZP]q9 b?!_~l&Ug.$LzJyޕ?'1B"gַʹ7?;t =:B,8Z>xW7dD_i\ h"$4$@BkZĺŐ$()'KxXA ,֠}r˜:r+˄u )YxY,bF1RuNCR3FfڷkAZ>x%vH' uS5sh+,- <טzU6|LINIxem+Dz2LDGҙУ([* u #" bJƎ I%,,ҌsC.rdK׈2_DpWO>Z% 53+%W4YhQ 3;+R2 U%i jHyHJKW۠ڗAmӛ8>|FsbGe%Tp%TZg=!@@ ;@z(j,l;FuOX_n̕OSwE_9a0 >)8_u~l+ڤkzݿT 7#OFߎG]IpZQW!\4(1&`Q8f(ycXFF0`M$u4""aL_| un0D9m hoV)f k!oL: Sڄ q9i!w ҹ(9z 3d:n?mѫո{g>/7Qb0@ϋ wxR6E]__$Eh}g[h ~nj}OA&#:e|{ӝ{H&%bބ̯'Bi!3 de?'L_M"hʸ~[{2Pp'~bSx7r%E2FJ"W~7/wXhܽC%&؝ǽe*$ FL6%c/QeRTIhܚ \K­&W0CxjoNRZʂw*wD406Ms3>y&!gkq@{_ퟒ?wu=jg܅5ɳ?xO?>廴lߡF`j]kPF@@%r-Q1.>IZz_o!AI`:t:P ӄG~|=6>z t/ѣ~t' դD|8X{ߍf{>yP?)ߞLF:kboW2swdcafhnOowL쟏piƿ9Laނy]:w?χ+WO0 : R+\x 5'&|]4p6JjR׻`:LdT'~8K tW({u̾?O`E4'§dZ3}סּ]h3SA??C@| PL 7 _^P wJ0ޒN|-b`8Kao,* ]hp0رPDV`\@X lXN @El%zbX-KjL~:Z;ũpo5'JI(= ie,f=xVvA.hgJHS_n\?UW 7[3b]p+MO\eMdJ3~:t=;d׸^rx_9k@#l uZ3@nf nx*tTᾬe>!p[)  `NnU^7hrÆľ rV'(l:Žt"xַwNͶ澽xku>Ͳ]$6}涃G-&}'Ъ}￝G ~z3`g/Y/?a.E>4gAev)aQ\Ƀ1le|h1h1 Ej<}I$a7z@o󍯀*Ee:FH8!tʔ`)lXҙ{9̅J8w吊 3M s:{NqT*6$qԉUJ XZ2UD|Vu.yK8DApu"?o|xфG弳sŸma)zgC8Rb nE~ښ?yuZnh[91Z7e~qqspeȎZA < !40瓈4X\(q"0g0 VlA0Eo^͉pDZ#fpNq3¹I8R.ҕT,eͱ$5*g˦R9uZgT! uM*`=%iT[^/+9ț_ 6{󧷻&hn._goʕw|ن^!臦0 ސ2"=#MWwM`B/>:!yj=C-MҲj8 u3D)1#St_nh5ېjzR%&1ֳsaUss+vm|ck ַ{o^<拋b V$CxƛLA)Q AHp\}l4h 0%xi#B3OqKc`Sɞ |;TRv|1p)!Ŧ 9 LV_?űXʻWj޻$(f7_ \7n]dNED'<5" Pyۻ<+kby/V9̆gRT!~=Kvtb`G[(dxwH."T˸1 rT ۫? $b0xۭ0ڠ1`IgV-9MO>DҺe}:vK$)=]6b0A`foR \N2I?pr!X?n3BHGF;Bd\"?@?@m8Br?}_&3QdN@ޣɇE A<̋a3xja<ЍcH01Bh_q *$s+՞ #YQ8 .Ǖ}`0p_!UxԦӓ\ZG!TG(HxQ[MJ =x7yS@@.^4HO( &/}z%qaبuF_˨K6bk/F0c5ӡnFHރɏW,H|JhI_mdvNκۣimñGCwv"m>XRٚf.bԎ˲S.Ui;N}\ D!FlHn,ś-UP úԺAt"%ĽSC2dU ) `SAeHu#r)D!+"Yh6"'g/쀈uhXM$ 4JO\yi45ʹ-YYuUEv9b!!%4$-8)uE%E^IB_[ފ(7h]x۲1+bo:Atlz6~3`N=>IҠ*'dSЅnABavH5(vE^kײҥj¡?B *ECWsBI ѱxFa#mUP2h)iI,Y>} 1AF[lVo圳՘YlyV ¹Y(ɌC, >J?9[S._<4H-'@#5'qm2Ӝhٻ8e.$L$]\N|kJ'LDO. S"JGɰ+'6^j2#b䕀#f̎)I &A(g6&-jw!$1kaא(뼿‹RΛ '1}9y0 dyOЁFlS un߇EAc'6T리"zܯ*/nn. ?Rh:bPI2׻֚)Ggg6wPN&4uH ٹGچs5g>ӔIkC 0VQhui*G;4I3q ?5b߯ 1izL̯6}qs92q,E{}Qwsx/F)Z3"Ջz5Eg3<.w0N~ؠ>QpK ݨ]8=ܧvt(yAl!ř*A0[OODt-O'gیP0*D2ACV9( Z?gCY1asdzLH͟bl5 ˕'Vwu~k8rnqy{7v.?٦Fe4܊YQq9 5|Ac*.6.o|ۧЍw;x772Yig)CZZ aKPZU!"yb塻ШW!8)d GEtJIq#IrqaK}*d)8U)L6q[GӱOG-e Ri TSZcAU J )KNsT*) *bOmT>P8MU(*IKhQW ^hvi5>MUŤi$ yɊ/Q വЪf9'ƒtGYK (f6UCe)*ӣ$l̩O/MXCt ЂfTKƒ%k Zq]\1o Jsh^nՠ@9jdaVӜW()t̴*)F3 ). \A-k-EnxfM9 SDGwOCKظ޵+b={ωm6.-vEATS6dm6w?CJ8mH,ph߹f*}ᄅ9[=77qS5%>(TMpA1fhk8Ơ }nv9W&a1Cb:nuT+(V$*˽92 wq>ڂ*|EaM -pY%I euč̤Ԫ1Ll`O~:zQId.Ly%gGg[v鳲S<.⌥d,\|NΟ̿?xjGg'Fl-o ֛׏OZQ'3N_ ^/Lߌ'OA<*p֫`=hN&1T,n] M:c:V'3 ^~-/eLR\]=Ez EPu(wWX"oD9rיChQcX ( , aBaדpKsEeRrBA,uX>{ƈNkx. 
v3ԏ6aH9!|Bjȑ"&fH$)0)DԿ\i@, 8c"6.`hEF$H b !h "GbJ-șuZIJm 0@D)}Pha؁A 2f0U!G9C `([g,;g{9L hsv:8?Lg@p˯eq[>;Pd>4$0Bfa X7Ilߩ{:ۋn7qxtgO>AY"/w|e=0fQ=)A+2ԅF/4B\JZJ;^땃xI\n>u:F50 bkN28 RÂUd:1?vF78:wq|BoeQ2|-dk kݝ8YN%޿OxuՋ7/IهG6kF{5XKz( }%^,Rᓴg^>k6?q}ss +S0NN_$܍iʀW_ O1M=}x ǭ&#C?hpÈ=#ބ}^#{uw~;r؏Ûޣ's3>ORh潟n>6#}h:?nYߝ>[ ƿH٬?/F)G7 W  y7›7㉻Nz?-*8ufj5}׳d8A´WW;4S 8rQne ,Ȯo}lQ27̤̖Oʥ mdmb^sL]W[H@lizgw{_QRV _PHOr|ik!jfx8ޞ'\I<{ޞ|sڒvu`\hm8]e I9|x""ER"A&T֯ j؆ !q P;lX_Dc{EQ?4[\ymc cfV ./o],9|b@Ey~]^1$W0HLT];qZ1ץܬ#(9 vn`Sf^,Rh=*X̢ ⌓˅ДugSQ:hVcIeUy%kX6M< чXF۱yS+`=Bݨ T?_K{g0KC3d68;OnJz?U]Q@ig ZhNjr F} $[j8 agٍŜr;ERJTs fug{XUr}k k_X(&MJA^cQ2Y"NQ- -;JaVTО?Lhkśk].sBy<$MG/ `E0V*'KGy!UJ0- $ɵ T ͵ (' 2/^=Nʳ5x5l0lWlM;!B'UMkngCQ\[fVDmPg%D1׶az_j3"/B TCH]]-#Z)23FkDr'?St'I7[JEk/ g<]@QGs8i;ŷC -xF@Y%&0ȻR%撠E\o ų;]wLq.dݘ_cDZ*~@prj.8JGQZw`6B΄xHG3(}geYfb;Tʅ ʬʋ=r9nt8P';,8U[u/;9t\N]]R7G /3ql^qYKpH :*[yQym4BH6.v2pK9Y(096.ݲ=0mYyܒ\˯e h6.K tD+/Ak=VCyPʷ@DT5jAA2ͯ;Y],sj>] FL 9 H>G 7N}. ޝ-/MkC f퀄`| 4k+Bs P #hbUdC!e%L5Ha"lf2R/GvppX{G4RճJ0t0e1?d"¶X0JCʻXF`I!%bG$RQbN6/`}w4 +>X! ch,N. mwK&QXdn{HSU*c*ٛjEE>7D)+@SY{`2dwd:ş͑%aG?S-Y.l>pބKyQNzSdf8Nf(o**oOzG}ZHoT#7_nnx<9ɕazA{;KVZ7OR0L@em&i 0IR2nc,W=h%ʭ븻XǠݪbPDtQFݣ-E1V@vAV|"ZI?z|St[U N:h{sOs ݪ/onuHw.˔VV&̏<9l|&4i%hwA wVb*j_E.mVDo.8\-Lĩ~8UA[ؕa+1&bAA@0Y9.,6Y0GaNL#$rjϺ`U$4"ddeh@MĤ&1( $EP팶oEGC_z,ϐh]M[^k[ʭ{ƶr{d0 ; Kb%X#a"Pp#@u\md-y),ǖ=;r:j U+Ym[kd&Wg'ݏ&;Pĝ/5H<(Q 1!@DD%:$H$DG8TpC0ބ|I):l¨N-KYO&Z!~P,vV;"|CQh4 ]$X#JbFA I7X\0'))yPh!Dh$A(* ]AYB/ .hhPl(,Cq֘F o8'9ٛf8ª>(w*wH]:R-eg[ƏYL21kA5Xl^ X:ُU02-` %HR~Lpu6j|]zbAP4a P)iBBk)#0 @5&D)K6$k˴YgUBY,WV#> .V \ $06A! tDU`o38{T܏l1l )c8%zg$s[1 y\XUF;Jc *ݰTNwJN)j5曔KN9MafA]gvDCWr݀(ƯH"*(!W~j)W%`7`9GXo'zUUh&[V$W˖;nOv»EĽNs{>Ŋo+ԘN)fXѕD4?Tt;8fANR\(?!V _vk8B %*gwfZ)4KUl1KUQ+ב#4\"5vDYƹY(`c_|aG 6ɶdA,B&Qpmg:MsNHzkv].U94/{ޛF s~ Eynvc9:yƜ``w3y0h76aW(FS;Zq>ekgckzv]u,? MA=j*}ܗA@0w\28D* X8dȏ8QʄVWYD!m;\quljk*1”ᚩLJPJ\/ڔkx1*'םvTXҚ'e2jQS'.д2a%0^T\llStʈ^rk >|lUT\}fV)!yBD\&a௪d+$vYHv=75F'qĊ^$"dqK0{kqψBk!+YR>IcNP-е]բ{oAPM3jlmepjbwȃ}'׭n`.X>tjX]d h'hu1îѠú8bT2JEXZ U->PV"8(@ְѶ6ϚHv=W3C[ @}ÁiIh=D7n 9`dDHZg4_Q"X8 qBB"H$ % ȡ$!$"*>\GY S@*mE5N %)4˧/PzGk3WvL#ĂY#r{ѧfNO~8Ÿ bH=jE Hx`hLjE!T!,#$x"I()Ș #XnZ=Mk30ZGR&$P(D> >C2&24TaZ?M.Y , /@~qdh(˅N_m(g^#?T'=+K=N==_{d|{2Kbv{AOfa^.> h %(Š(NO_@PGjb&@R2TH8.[<}7@]4] Adp(2af?*a`>aӮ4 8ym9,L}U8nS^Qߧ4 z_Wo޽w_7޸{|>Oϳ-a:1+&S|< ayv < ~8~ҟv_cSH'{ۻM{ Hf_v}weryyl\2:%GѻA;_E>x49yy|qdXGlz^9qڻg4,н5q'i}P!i9aO(krfTI kT,[.?Oc(:]Np+a 5]C}B­7@[UޥS.K]͝ ɾˇ?ǓOgy]>LvP'u#S[Q|!/@K<6Jd.S1.>ɞSrWq߿0AQj䱱דG.qz9G]| q<WA 'Ԍj3qIL)QSrm+ȡ/”Vߏ[=Z{OV4 zn3cwP?-*C[I Z=XcDjj=x̝zf*#aZ17}lrp[cC̣Ћ ʏcO3y c?i@V{Y/%# Ĝ:xJmhc*"o#/1 UP8l]W)CA>p!m;hV8n@u#$pbw&p=E)xě \`wn -N4~`<H(֣s$b5HqܠYz^VKEl$ѦL i$VϨn2Ã7aIzQHi{GMaU~ykг׻cl%.3 6W>36vw8GM6b(.N z|Y-+,‘)ZZF? CS] 6cԉ'd , 5]VBK!rʸ nO+j& XH̳'n}폧Q8mO$A4b=6$%VjIG''A-]AGTԋ/{x a(U2֍`0,h:zE}䴻͒2:H}ikhm>Qa(T` DyT@"ʍ59g.ٔ^t՚6Tqb՚g<)Qqf9c^5ۣXgڟ۶$Ќgn6v̦Nڹt< ڜȒ%P$P$~gbd'#oxsm^ BxmDe3lFo|Ը=2]p rw^bg;'m}E^"lrW(o22h3cgh,VufIOS[H&"$"BURSqb-|y{}6'}]Pbdlu#3g =c%(/1B?'FjpC)b 9-3wo^t'̘*Qvx|{VcjAWwp Ƿ۬ [k lW*aĤsF86[ y:!E/ H2]_\y[FAS4D0j`*? ATaC0H>zN`[J*q`?PYiIvjJgC<Bҙ=z\0Wwn Դ43l{@W;ҙaO`p2"3/ !ʄ9wYLxYeIl*(HYTkn&m"cQQXRJ7^wA:$ ,w=1 4 .T`0.b.&ac8sEJ)4\ z+>- W_ V;Xy]0jc Nc¬8ijs1U@(8BD ّy Hg#\2xZ DT18Bf ajugFNcV Cvo;Qm2 bKm fqV 8L aᐔP%B{$qF1 f>mB_nT}{th5 kQJ1 ҩHr S9׬.ԔϦA.>O>r)  7%=)>%y<f6sLq^ :*5Uӿ,'qz}\q(/&&M<\uO):y6[HUt[h68jsp"n+yKʡI$8cL;G< U ]ԷRr3Eq>qhUԖ%0sDUPub`w >. 
-`Ed$\E!p '18΄yaK+§;SQ%6ס {Sܘ7AW/ɠ REoj׹3FCTnPiڮdk$o  cz\@h%6V1$ up$` ^QCXir^prd<3(p&CZ(ʓb;INR7ܸJ|Y.&>?&GwϛYū'6 pN\f:Nǽ&<B;=^OOw> <{q}OdPY !TtLÇSHBDB[UX*)%؀=iG21+#t?N6gUBOK"t=WbFB"Y Qi ֩д JGr`;GG%zy"[`ksUSI~tX?mNrq|7,K nA݃_&(I2UG>MfWpeҠƘ)A+%3ycXFGVIcNHng U3y8|.w$?NA%f|o0NN:^cMPAW&LqEfߙÔķyʹ Wku?gA%?Z/ˬ{񛳓m~%iu:HG鷄󿖾A6 O5 }:M_Iߞv[_ jۯ3_?]N&ݾK;@2)W=\߸ !Am:(ʸ~Kw}+OtjFW?e'\Qtx/ e"vI.CP4KndN qY֛nzcA-pc26lN2HSH6d7&=t;by~J ):$/P g~n/_^z3kzEU¬yIadZ-, |0! /× -:Y';>[?Mx NiOb Pk Uػ#(~߆sQQ 䯻JnR~˄7'̏~1'5Ǵ.MsmmGΩ6C$L md^䓁;P?ޅp_>b%.b9ͿA߳ez/*WBFCnAaa4G69̒*ӑnV\*FGJ 6cOw/(B99ڑQ*F|8T|瞥Q\ Zk@_|vsmo3V*T<vW 1` %{n}e4 .gmoCYJ]q&xqfb W2N+RZ0cIngYe4y^D[,@͈fDmHK9#(?,Pm`^QFscLacHbC!H3ȁ!FFZC˿ble,97eR1`CPa7cW.п٢*6UU2(ߪEX!ZJ.U|J=jw%ib Sk3ly?J@)BNT\دswC?(xW:ECo V~w/:4N{ˤ?KfQ&qRh>k>UT|v5ġ+4Tvir֖@ E-%}-ɰgb ݵo훫}pػ ~FEL9QE&ǚ\vkcN^-0b](+I_!5grqR8(Q7Į ue pǩ`nFHd?uk]=qb*-®0(Phf5akC(;^Tە&\hT:$M&;4]Ő? r<) sqoT4`<KL1FXe5Ir׀>5K g3QTAp%ꕛfm"rMd&e/~|f!s=^ER q%D ֚ph- =Xoq1%p mep\o:E𵱞[OQ Bi GV*fT $h\d)ig".3b*;X*{`)1@!r kOW %{+ѭ Ch8Gcb<D#EAК `p9s8̆|8A,XP8؍P:G\Bcs˻oEe~aVincmPX~]ùnm$$FYJT-1ܻpD R  (2qI& _ ԩ)9BVp%5gM֐6[#'\ӝ`vy$yH5<K* |<8=(Z"8.lHeG1Rkv\ʏkb~\vTPi!s)E`lE~Ӹ@9!BLb`ۈRl18X w wuǴh[tVV5%G4>im@&d JqH8CrF4?2ڐUL@( mB4H)QP>tcݡ2S3"*zM¡ט!HOw/m #&G}qon?{6_1a8K|ps8mдgqqZzPN+iAR-'v*Y;]%gÙdwR$u `smIvRjP5u@Xffo ٚ"^J$@"YqHDDuxju\ F] JPKiyYLHW{Ԣ [ԪD}kD} *M7n6fpW*K VF#job37U}4hݖzK EB֬O~YSĽ]j*Tc ;c)e."]:RW:wQniDz.bc֞3/L)L)CLkYG(%$=?~a?Rںu] F]~%L %Dհ=G*A-pvZ5ڤ]3~$Ĉ瘩.5Pl&f3.a9Bm_K˞Mؑ.W; GG>̄D+_$ }_RibFg Î Ja9aeC"Y*&p|ؓ"T!a>(OQA+&P;ۉ8BcZVf)ce(vQ`%Z90}s[=t8fK8ntɺZ3Y3T|"/䏧7нJFiq1q\Y*tPg) s{yo'9|>=Ax'It5m; Ft]%bs F9WCJNѤr7#Bp?F>&VX 9 fc+#YA\ӶfG,JTɩW-N_A wlT]DZd۟&UsjլX]A7\X   5%BkFd;D*V[6Q|u j,a+#ɕ<]Y 'dz]%NNm.,MeT/!ӋPsC \s6}(Ls:h?]\~윚OMt[|ta:a<<]v[pc>D('eGÉJXd%ycC[~e~czy~vҒ?ffjF\I9Oբ*C-j lΌ) I^&?o)H&:>3KϺ^4% x?1 y]a.ȋm`kWzj="8`B?\.itAr?U9<yz<ױ)w+_kx.䐢=r*Di ;wFUx0f ק886T(~ :rgWB_jdt))ivLM>tۂrfbS/h0XoeSTq!j:{US\Hk {g%1vN,߅Y7Ͻ ms3e0һ&Úd*.xuvS>PZ3I1˝f ,:3^]j]-T .e-ط/ LqԌܽx0%+,I$G*JF[ܕG)Kd)WdrSt]X;qggyְcLnޭ˺xB%~{Y Kgł|7l0?OsYsr~)#-],//SB:V61Aԙ}%O42ȘK#c.2&{GxT"()v$Lc"LI"2X@|)_υNzl NՎ1M: 03ֹN!QP'G(æCaä(ӉS,sZo:4K!] hHy )?<\A2ir BCKI*B"Jk* OUNU; Ӫ#{\0T ?tS Ɛ$6kV|=`c @GzA pA+ȥ5\zbWvzh. 5ǐ 05S{~ LQ68/C"cD,A(vB`!1'<](bBCu"G4y!P|?@ВZjD#"Vu90&8iFbۺlt|MO`"2逸5V,t ;^` F=WrV Wxq,Th$BDŽxH2.`G<T+_U]+ |6&ZIwi~Ȥkf&cֺLSѷܷxf1pƑA^$E["0' @(Qq2p> @;>0tB}/M&/SË>3v:B{*:Y<ݹ9h`J|o t/s! 
;/ PͶu0cZV|?BlU E,HTUfx\*2"srT0|?et!8)1|LpSq%0`:G&|FR"@] z#IB^t )m'4/B܉F} R)M>@a''H?X_Ån`dw=ɵ?&oT( ; J:u6W[BӍ W(tJwFA]GvRN X$7~SO4 sN+pl]2\qL7{Q`>ܤ;W Xಹx D jqqs8n3)`tehQgGr1DLY4U]&Pn@>?lVi+,$-0.D:8~'de{%niEH\ ŝBR<=$M%EXa0R^^Ck#˩ Y!{T G4Khtt^p  =dqW츗l C8hI_{4Ch2t=q]m65D.zt-Km$djr+OL:@0]֞C-,I0r;6 fۍ[ 3΃tg־$rGld-OW\KxSrLu2>;x ?bվU'yH` .2'(_zx1fdfw}-%(>a>I?Z+ 0wgڈ6~7[(()rjRz˘'vv g( IG&rndz\N^.Cu4r fW{4$`~PSͻ/h΢k0F,"Ö໣ׇ1y^aͽ * ȷ;5vܫܡ,BjƋx ]"p /A32W`uC@#"(&'zC`/0TQEyjel~XV8']Xb-;²-oN!`P U;cY,ٶz7M.Jw/#^j'xaͮ߶ӓq/uή{< O61Orgvw--_iS36C¤,[?[j-w00FGWɛ۷(3+(5 -uZqDփ%.cuL)u+\Q76).x.XnNjs .f[r0 UE(xz `r4t2…WzaL#a%R߁}aOQ- | h踟ru|sDn`+UGsyx_$F[@)ln,T :w<g_,Q qy6d9˒_Ç̳[YXY-&Zdp`5wVr'VK7a uU^)0Q9I7 -\f* ~i7WYPdd, ڞ)BϺD .\ G?K@9;?ݹ0bU`z U0Ln.>m\mf?+W̸ ~ c{U ɭ|T޸R<>sqQyV7՘) 'ˡ+`rn@)1{3'9( 섎;Tbkti9x5S˟9ab՛gkϚ>;ϰ3|:T(NN:${?wE=yºIn+e!y k^} !-?۫i Y{89w*6~5ހ&e~9ib-Tv)v т `;{ؙ$RCg{i{i{i{'lTdΰSq(J<|܁q90BZ(&GWҙU;ߘ% [wP1(tĥג_D<޵+"e1+" L`q 6gf^v`hDZ=bVK!HX&UXUqIbRk'Sf'q$3j "-wAu|= ]7[cg ∐[R:G37tO}oowˇ .֭&I%:Yd࿐v4H*Y坋ګ<^Q _߽47..!k4)ZAp)6,:K%J7y\5'\$yV^ ɢLڄ .rHBK’‚@ h߯\J;`A('%Q#^TK6PpIhDtb6h dBDpmĵ4YVRDSa]UPN~Sm1)jXژ3 )ZJ}dΰVA^nAu9dy/"Y2Z͟m *%Vttu&iݵ&11#p?zɥR&ȓ ḘN,*h+qfHPq?V\icʹ + y,8VYoMʛfJ"W"j!>Fp&ǑRjΛЋyABoK(?aP|=ٻc sJSUW|`$KQfK>`;5`xP">bFq$BHHJ" M)B@BݏU xuE_4 z:YxL'k` QT2.Hk| 8x€;ʭ//(%A471H]l#?!)ʂR s:v:]u[ ;d%1&eQ (Qb^TXeHHTvQ mf-k؆o,iWީc y/]KΫ/BIon=ݨhtS3]Əw_a2AC[ wvmwQY0ocȐ ):|槓ۻYI)AcMeBxCFZR 駶eqwC2\珿؏WEE3Bw]K#UŘn_Pyk< 3^w~9v}яW'׋oXٿo$lVRZOFj=?;>ԌPNǁiвmT†HONF'TRT]s6𴎇?kAI6fdce =CPfH[=Gm$G{9MBƴ5FVYsjUEq՞Ǫî <]C\>zt]%%ZUٕy HBۃpPojn]zPv g5&Ap~[ITG(! .VƦq K%JTɴ|x<Լyzۛ m81JȄxMb 7םY.^9d@A!~O|.@awt[ )\pzxK nV,ӄJ9^KH )Pp^o*fU=['zn.'5n\']ԍmf)t<˺Ljfb.jAY\E[uLo?̿XQix*l^j.8'+a/vn 6sNҼ ?bmgƁq8M"2Q*yڎeI=0|}t]KE;3 Gykj pEdǕLN_5X=ҥ _Sza_9,s?eaaBќ⼦sׄ!攇 3~4yIDlٮ Z0; ,71`)1EjsZH2Z9'sВ%ĀCw&%œsRJP$J筴SNiP ZRݵ_^P"$ ` ` uRfXLQ(q_8!ħ&T / @Dʖmj%7қ 6vrm_gNM3 Zra, Dzsmfza}e+=<8 Z~:[{';/^aWrUg\|^myCk$o[|/^s黙O9.GGCD0cys!Q#' ,Nׁ Wq#4 \[ 84Ġ(9,OQ q^jAF GZ|ر(%;-+i%yŘԎU]m*&qSJ >)@:*VVz4%,xd4xhngnBm;e4ROͥ^dbeOX`KXLr볪#f6;-~ep=hg8I{`$APN{9NMWFӔfc|26+fSaϭS2I@H:ʡAIJ8ҷe4yuqa=g4Eaa0p}/Vz1ˀ6&eWghl O ~/5%{#Qw)vD^xQI*ܬ%<$ơ%#GkrNt2U.85d$}}j9&ד9\{dКe?'wonB|M@}.1Pd:Ą@\΃U koRqnLO"ruGo?<>@#_US?Po_n1UQ?{O۶_hoK#K/r^UF;Cj6IINHg3gfc>~>GA{5N3xBoG/'#$; eގ{=h\';fH\>/,5iROb gڛ}NɸS.XzIڗrIY0f7 @"02m Vh:ނRD 踮S1,8hn^n<@0Mb3}HF\(Z5 S̨' 1Q@h^*>8gО9H{caqZcئ61kyrSo|>iEcE?Q֥XdAz},~{nۡ2 apA./J!e6cи09!X%q(**#aeҷXQjʻ:CH a42TSɱ#8[@bj,/nzS+ڧ!W`hڷ|dzh%,9a $ xnÍy>POoCxϕ(gE5E1TI|lFgkN>i ??.JͶGCQ\[-+Қ70(FtO 3;[.Jhu!Bgl@!`nG1Q$0r$sQĸp2ZPac8 '쀶-tfzMCvj|٭Ԉa L_^.8'I erY:B*< T\k|,Ty!ɤ`B. 
5if%;[>N[!O#S†-ՌJݔa+8}2q% `4Rm( )0Zj)`"}A0.aL~,'XSHMEplǕf~zlF@ 0o]hXF`h܌Fu~>-%%;x֪`![jC#]lC6LUs_&Ǣw7wܙvĂ-S1jgri!;6ιߌUb~%~z51VkѰߝdA Q՞>>uqk:=>K}Pg;:m!\B0!0dYf(P3KKq*i)UjHݪM!]V[N4JD/àm:Jl_0D&"A}K֦Y&0t"H'A]߶88ho|_ ˗o޾]~xUG ~j_?iB˖qR4N髫Vyѻ/:(Duah5P2y)ە;L'7g3$soNk7h8v?kQ79K6q4J:qiڝi_.}7Y{g<;nlZF/@mRf hG!H6JBaN >ϡ)D][ً{ՐW`~S4=غG5jժH_H^%51@̯Gɗ~~{|ttw;W13UOvdn' x3tMc[25hv۽?~LIs>Y=Su;rװB~2u3o g;WN;ںܙq()üҍf`vPSK돋ׯkoo`3mLkgu7fIQy;]`ί~)٧x =_&.:/tҡT&}<Is0=ۿ .jOz1\ a޷Dw'B`rN~Ԟ$ Zk#8wcHǓS:PR =d̤ԖoN%L'^zٕs.1ώOIә|Dž M.`suxz-_;dzGN[ 3"DU{89& m1.:bz_ifun3.8gFj 1ӷav!QAz>X0MRĞ5IT#CPX!Ixr}8%0B{8f\wd 1BJƊC)@5՟q,\&9 Djl` a kJkb e؅0P DC$LjTTI-<I&"ţeS$I!bɜE`i60E2(-@XA As4k4V5W[AM,4/Tpj`Ysa*[h֊8 j;vc',Gۑ E&dc`(AqVI#+~a0 ^z @-$;F%8@F&T E>q1FA eA<ݡu#hݠև>y:u9:ڎ r-8+10S|w: VP qB$\jc$A@D9y0h 96wDG 9Sy&iAD= 'ϖ'ٲ1rl9y<[N-'ϖgɳrl9y<ϖ}/zlyz-4MBD)+k~q{bے`$:}AEN&aDg*rP&X*\&w؞[\{t9B!HDbD0bLcbӡ B9f6TBjc "JͳE {QP܋^rf@QL0} Lf)17}"[F(P"ِȀJ!Gh]U-H TTl!+5И/A]1_ &꠭0x"W IDP6H[h&GmD,(i(2~ZK>r KxiSIɻS@5p3XN R?Wx-*X@Sa^g٘hb$D `numn-]>rZ`ɄTTTVYS˩fS#M9(~JeY-&YT,o*2CYf?^en|IsK[(6Ufܒ^۴þ% vKpڎ{8@>W>.D*tQ4օ!V 7"Hh'X!$\!pA= < _8(sJtȒROh8Dc4SݡH?a` VB G$e84&RcQ`3ILX@-ʜJUmkkMmIQ2WߕDͶureDdbj0kg'ޫ(u(JwF4&;0ˋLn_>f,*Q)]܂UJ<: x ?7SS'1/\0aC_ǥ3YuFuZZdڍ28#!p-"S;1{4V$8v+J2u-9NGa K$9a{q5!!&I*,7(gk1gF`*C+^ T\(('ޣٸ;b ݎw3u*ԧsO(7'ɿAjȅ Bt!D .u,t)F׍Gȴ"GΨʮG>\]^<~F%D`"g|mn؎ox1J] ,OcNis JmXŹ+46lhi5J bG( ,$tq?-brqp7Nf?srrofK}眣Lzas>'"bulcvҀ屉h8lf`}h=%GB'd&rxdޜ&!J|KЉ(71pN$pS -HP$cT\Z F$f2š\Y -u 4:\Q@IH[1jb \},Ra+BXCؕ1*dN(5S,A`@<JqXyĨ #F>MViKrҖ45L cӭ4'z2-|{ $Q\w$V4_UQη"ۜm$[>j-W>Pf{ۚs1U9vٶjl[ClK[3A>+N;׃]Yo$7+~Y,Y͠[^ vvm3/30GvVKZ ߇*ITUda]Y_|%1Ƌ xqFQ!#``RAo5{P| XJan@13<ѿ=A]1ydw- CyS_a2`2\J!dDŦ T /^P@E%hT՜j0Fĵ-kTcvclNt{ҷ@ʸ?וq8=!lάyW =,^$ȅr=cH+P4G*Ѿ;qzhI5ޯCM`h_W[_IŲ_ũ ~4Nv2}'7]e.d!sЮzsz#*Q6/>M k&PG%Th3A- 4.4jkз'x-huѠ0T4 f0$(ȑfрQ͹d:4h / 7 0n55V;f51oN6&!=or@ٶO Xh{#;V Ţ(KkNZG~\/cîRHTe) 6*D"OAzZ洙٫QQզw JT3`U&lK2[9a#gZ"*cӑr<}#u`*T%+&4U1KnawCiFlLnQO3琤JǥF鰰MTƞhS{(&8 PDaYBN6 9cx$>Ӽ&0+'߆qY꓃?<~:0I˼"sXwZЦt ^CF[ՕJ ~9)jQH!iCjjgE\ZGC5j9] jXp;۵I"KXRy ˍmjKQgi8XSNk#-FZ U5fvō3=Rk`޴{i]AZ*euBJS`ewzEnNzaEe+? 
[̰ZOo/{|[k]Shh_,Ph?n\o_qobl Ng.o7YR?s뇇+}plADJDO/>ɖ4ދw_{>´濾1m×rU![_U5{|BH۸3Q+)sEAXqUs>DpF+8Zj92I B QۧHZ,)eTS:_S W/\wg_vϐݫmZú|c*1UNK $`Ɣ@G[zFpfx93# {陕&nT|I Xgހa͐g-7N辬AZ12AKAUs#x@1S@3Vp"Gb=蓩W?R%HTc)R1(DU|,&q4b!JC}yFTi.L祭ę#n*R1_Ytc눉RG] "dlʎoOT=PP!O8IU~,or0 \b%fUɴ9yb68݄.'l;k @ź_ŇڸE\ W\`pS .Mp#*̈́0D@a~vp[tu`a 0A`kjh8ljፖ0WX_W~&)ؕN!T:dmDҚR Սv {bwU \*J 0>\ۼ$|>iDPHCԬʱR0w-VA!y~45;xe-Vl]5q~$jYA8ɩ ['Q'@=,A$ h XU5Djouji7-ִqX$1V:m4g G00o&TNjsQScq OZV7L#_@# ˸_^b+W9E󎀅T4njf!W$MX$th hIDɬ诐H l֒!kmZd)~x@VjdG7}{]x*7u{c7׷[_Uuwliw5 Ѓ~LA(ݭE]h/$[1kȁh\U)7?l%RS&[_6l6$]Sn-Fr4@Gӿ3"ٓBh^*쵕nLJA[ P:n3 FS9N>q`((ʋd;ZO v\:bm"W)2p0"5Ԇct [i,43+DcIfk¶<"NZΑK8d,>wv/}yo/ӡOί['?R.|&eS-'"`D#h ^hw^.<ʧf7Gkh1~B'mAZܯo`LelofIZgi0` lEsnĠb hkJ4LQM0EL#9v 8@_'ÇbL`1<`X x.4`maĥgRoxn .*(n2W@cUAY^>sq7$ ԬR*i\kM5J4547J>a>AQd:=blVRٯ>m}HtFA!x:OQJ'cCsVrۛ[#ʝ G_H0v ) hf|\:0 %D=5z>|[j@A [q3jOw*5F;&?O"&Kb†ާzg!2e@K/1B(Q1IU LEmgcg,Ĕ#FUL [Z*qFʎD Hk2r}kfQP{A!!g]mg$[1k%SBmI7' $A m j_#6ќ,`fՂ)h\"<䧴AO9 ` O +vXvQF`/tϚ'I" #z2ムA?9 B`={{W1IďeFKTȊo)/&9A-S5h,@@v/&%h"]S^q`t$\|sY{0A(mu\*99^~:23_r<Ҁ|-8QP$3 XzEۖL@#:'^oL*}vW/_=o7!Kܴش6rX4ldIՑ5#GȖGVa!{|jD<x#n~n4vH1MiicoaOfB]J83%%T"8 )133I'I6~ yA_; &YږݨP1Mh'p?)ggjnޜK9BݺܛSi@BGĴ3)xiA `iQ|Fge  L8P^lINDuUPV3*3m݆j1mƯȴ"1՜i=yuΔ3@yQrNlA5˞UN6g6_]~CjjUVI' qe*qL8iCSR7A!L{.3=LbYySI.1i*#JQe3M0Aj:azdWU *̤.9 e9eP!l׌v^^b8|UW%)\zd2hSNe :'*NO~Ԋ 5(b Na"z׃gCHu黈a|3.cxE;ۊBZv0 Fi+la-y룗֯G|ic(/ݵÛj-Pa''W-_k+ԙ]̇.]+d(ϻ&?=.CIF J24eq{' CN:n=ŋvlHQݰ?0Y/ kM7,}o`4.7A`Myc^қh 2(S@6dI)5@Qx5/Vsqw~'2go6؃3|\ nԐPUbzIpL7eCN2K7|3x/nD9عD _ހU( X0]1 >-|}| MXfGAHޟZ'nxf lFgV mu-WSzfE=YcU]vu]/}uƋwaWxEYvq 7 7W\>7]:ow,([/`t~{U| j+h-ѫ7r<] ~@lpCM؝WϺyGtXu'X(:l t|gyX~f :۷^==.`*H:8wőe k pD1P ?g5̤– _NŎLW LO-{w Q90*5$)(1Nلck$͜87_ё6.,>f.> )||(j>5<9d1LY2Y\VXx*}ȼbx%}ᅛw@>T!Xo-i(wwLw|vv̦: !F._X]XݦrD7ژW񰿐cAv7z4yUX hWJ\PM Њ8nKiiB<1\E y6 Ӱ730+LSF]ViEų$ @vtDžJn`!X#0j,Վ6B8&&*s\y7wq{95ܗglh.w0z_U(dx_̵FWZp6Ŷf{lWv e5׍v8;Dr~[L7lt; mma5BʵqXV7tGw1{.Ɇ^B4<~k׃#~ Al-ah^cj2e,YT.A)0BTzJ4rwv(‰~YF+=O> ?\*7V m[9ʻBB주[R]M0TAxzaQ$WmDFѰyiѫ54W||`=a+vK_tCH8!Blr#JL'S(SGWa)])cẢN4a*Ff l zg(J 6n8RgoP@Q`IffȘԁ 6ط$_c- To[3/ XL>~u T3MN`c97.0gM6BH[dΖȽǼǼ.)c{yKzqaky/:FɐL<}p5<0\mmPYEP8MW^9e1K@D  W=1%;Ujڈ~;WMJ͙%a6U+2qős.(ܓA>eŎ2 TPcׂZR$^ R-X``F*PQ UMIj@)sq?#Bn6?̡[R%%N! 6uP+$=6 :F"k){KdHUPUJlB-*B-iUpԩөdkR ja3!fR (҄ }T05Kds5{*+NeRg$%Lcᰳ+-V=n#G"idzEy5w1̺viF ٺ^7Qb զŮ;2TԇCMPԪhQ=@QL,& }d! 
var/home/core/zuul-output/logs/kubelet.log
Jan 29 16:29:06 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 16:29:06 crc restorecon[4750]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc 
restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:29:06 crc 
restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 
crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 16:29:06 crc restorecon[4750]: 
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:06 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: 
under /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog, each of the following directories and its catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 (one Jan 29 16:29:07 crc restorecon[4750] entry per path): must-gather-operator, namespace-configuration-operator, ncn-operator, ndmspc-operator, netobserv-operator, neuvector-community-operator, nexus-operator, nexus-operator-m88i, nfs-provisioner-operator, nlp-server, node-discovery-operator, node-healthcheck-operator, node-maintenance-operator, nsm-operator, oadp-operator, observability-operator, oci-ccm-operator, ocm-operator, odoo-operator, opendatahub-operator, openebs, openshift-nfd-operator, openshift-node-upgrade-mutex-operator, openshift-qiskit-operator, opentelemetry-operator, patch-operator, patterns-operator, pcc-operator, pelorus-operator, percona-xtradb-cluster-operator, portworx-essentials, postgresql, proactive-node-scaling-operator, project-quay, prometheus, prometheus-exporter-operator, prometurbo, pubsubplus-eventbroker-operator, pulp-operator, rabbitmq-cluster-operator, rabbitmq-messaging-topology-operator, redis-operator, reportportal-operator, resource-locker-operator, rhoas-operator, ripsaw, sailoperator, sap-commerce-operator, sap-data-intelligence-observer-operator, sap-hana-express-operator, seldon-operator, self-node-remediation, service-binding-operator, shipwright-operator, sigstore-helm-operator, silicom-sts-operator, skupper-operator, snapscheduler, snyk-operator, socmmd, sonar-operator, sosivio, sonataflow-operator, sosreport-operator, spark-helm-operator, special-resource-operator, stolostron, stolostron-engine, strimzi-kafka-operator, syndesis, t8c, tagger, tempo-operator, tf-controller, tidb-operator, trident-operator, trustify-operator, ucs-ci-solutions-operator, universal-crossplane, varnish-operator, vault-config-operator, verticadb-operator, volume-expander-operator, wandb-operator, windup-operator, yaks.
Jan 29 16:29:07 crc restorecon[4750]: likewise for the same pod: volumes/kubernetes.io~empty-dir/utilities, utilities/copy-content, etc-hosts, containers/extract-utilities/{c0fe7256,c30319e4,e6b1dd45}, containers/extract-content/{2bb643f0,920de426,70fa1e87}, containers/registry-server/{a1c12a2f,9442e6c7,5b45ec72} not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
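Every entry in this run has the same shape: syslog timestamp, host (crc), restorecon PID, a path under /var/lib/kubelet/pods/<uid>/, and the admin-customized SELinux context that kept restorecon from relabeling the file. Below is a minimal Go sketch for summarizing such a log, grouping skipped paths by pod UID and by context; the regular expression and all names in it are assumptions based only on the line shape seen here, not on any fixed restorecon output format.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

// Matches lines like:
//   Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/<uid>/... not reset as customized by admin to <context>
var entryRe = regexp.MustCompile(
	`^\w{3} +\d+ [\d:]+ \S+ restorecon\[\d+\]: /var/lib/kubelet/pods/([^/]+)/\S+ not reset as customized by admin to (\S+)$`)

func main() {
	perPod := map[string]int{} // pod UID -> number of skipped paths
	perCtx := map[string]int{} // SELinux context -> number of skipped paths
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := entryRe.FindStringSubmatch(sc.Text()); m != nil {
			perPod[m[1]]++
			perCtx[m[2]]++
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for uid, n := range perPod {
		fmt.Printf("pod %s: %d paths kept their admin-customized label\n", uid, n)
	}
	for ctx, n := range perCtx {
		fmt.Printf("context %s: %d paths\n", ctx, n)
	}
}

Fed this section of the log, it would report three pod UIDs: the two catalog pods entirely under s0:c7,c13, and the static pod (below) split across per-container MCS category pairs.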
Jan 29 16:29:07 crc restorecon[4750]: under /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir, the catalog-content and catalog-content/catalog directories themselves, plus each of the following catalog/<name> directories and its catalog.json, not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 (one entry per path): abot-operator-rhmp, aerospike-kubernetes-operator-rhmp, aikit-operator-rhmp, anzo-operator-rhmp, anzograph-operator-rhmp, anzounstructured-operator-rhmp, cloudbees-ci-rhmp, cockroachdb-certified-rhmp, crunchy-postgres-operator-rhmp, datadog-operator-certified-rhmp, dynatrace-operator-rhmp, entando-k8s-operator-rhmp, flux, instana-agent-operator-rhmp, iomesh-operator-rhmp, joget-dx-operator-rhmp, joget-dx8-operator-rhmp, k10-kasten-operator-paygo-rhmp, k10-kasten-operator-rhmp, k10-kasten-operator-term-rhmp, kubemq-operator-marketplace-rhmp, kubeturbo-certified-rhmp, linstor-operator-rhmp, marketplace-games-operator-rhmp, model-builder-for-vision-certified-rhmp, neuvector-certified-operator-rhmp, ovms-operator-rhmp, pachyderm-operator-rhmp, redis-enterprise-operator-cert-rhmp, seldon-deploy-operator-rhmp, starburst-enterprise-helm-operator-paygo-rhmp, starburst-enterprise-helm-operator-rhmp, t8c-certified-rhmp, timemachine-operator-rhmp, vfunction-server-operator-rhmp, xcrypt-operator-rhmp, yugabyte-platform-operator-bundle-rhmp, zabbix-operator-certified-rhmp.
Jan 29 16:29:07 crc restorecon[4750]: likewise for the same pod's catalog cache: catalog-content/cache, cache/pogreb.v1, pogreb.v1/db, db/00000-1.psg, db/00000-1.psg.pmt, db/db.pmt, db/index.pmt, db/main.pix, db/overflow.pix, pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 
16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 16:29:07 crc restorecon[4750]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 16:29:07 crc kubenswrapper[4813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:29:07 crc kubenswrapper[4813]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 16:29:07 crc kubenswrapper[4813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:29:07 crc kubenswrapper[4813]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:29:07 crc kubenswrapper[4813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:29:07 crc kubenswrapper[4813]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:07.999768 4813 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012311 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012388 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012398 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012406 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012413 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012420 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012427 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012433 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012440 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012446 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012455 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
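[editor's note] The six "Flag ... has been deprecated" records above all point at the kubelet config file. A minimal sketch (not part of the captured log) of pulling those flags out of a saved copy of this log so they can be migrated; the local filename kubelet.log is an assumption:

    import re

    # Collect every flag the kubelet reported as deprecated at startup.
    # finditer handles extracted logs that pack several records per line.
    deprecated = set()
    with open("kubelet.log", encoding="utf-8", errors="replace") as f:  # hypothetical path
        for line in f:
            for m in re.finditer(r"Flag (--[\w-]+) has been deprecated", line):
                deprecated.add(m.group(1))

    print(sorted(deprecated))
    # Expected from the records above: --container-runtime-endpoint,
    # --minimum-container-ttl-duration, --pod-infra-container-image,
    # --register-with-taints, --system-reserved, --volume-plugin-dir

Each flag in that output would then move into the file named by --config (/etc/kubernetes/kubelet.conf in the FLAG dump below).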
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012466 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012475 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012484 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012493 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012502 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012511 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012518 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012525 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012532 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012538 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012544 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012551 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012557 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012565 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012573 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012581 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012589 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012597 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012606 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012612 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012633 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012640 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012647 4813 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012655 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012663 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012670 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012677 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012683 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012690 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012696 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012704 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012712 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012720 4813 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012727 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012733 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012740 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012748 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012755 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012761 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012767 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012774 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012781 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012788 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012794 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012800 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012806 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012813 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012819 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012826 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012834 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012841 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012850 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012857 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012865 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012872 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012879 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012885 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012891 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012897 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.012903 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013060 4813 flags.go:64] FLAG: --address="0.0.0.0"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013079 4813 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013098 4813 flags.go:64] FLAG: --anonymous-auth="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013134 4813 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013145 4813 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013152 4813 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013163 4813 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013173 4813 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013183 4813 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013191 4813 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013199 4813 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013218 4813 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013226 4813 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013233 4813 flags.go:64] FLAG: --cgroup-root=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013240 4813 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013248 4813 flags.go:64] FLAG: --client-ca-file=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013255 4813 flags.go:64] FLAG: --cloud-config=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013263 4813 flags.go:64] FLAG: --cloud-provider=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013270 4813 flags.go:64] FLAG: --cluster-dns="[]"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013283 4813 flags.go:64] FLAG: --cluster-domain=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013290 4813 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013297 4813 flags.go:64] FLAG: --config-dir=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013305 4813 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013314 4813 flags.go:64] FLAG: --container-log-max-files="5"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013325 4813 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013334 4813 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013342 4813 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013350 4813 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013359 4813 flags.go:64] FLAG: --contention-profiling="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013367 4813 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013374 4813 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013382 4813 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013390 4813 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013408 4813 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013415 4813 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013422 4813 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013429 4813 flags.go:64] FLAG: --enable-load-reader="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013437 4813 flags.go:64] FLAG: --enable-server="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013444 4813 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013462 4813 flags.go:64] FLAG: --event-burst="100"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013472 4813 flags.go:64] FLAG: --event-qps="50"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013480 4813 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013487 4813 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013495 4813 flags.go:64] FLAG: --eviction-hard=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013504 4813 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013512 4813 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013518 4813 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013540 4813 flags.go:64] FLAG: --eviction-soft=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013547 4813 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013555 4813 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013562 4813 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013569 4813 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013576 4813 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013584 4813 flags.go:64] FLAG: --fail-swap-on="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013591 4813 flags.go:64] FLAG: --feature-gates=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013600 4813 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013608 4813 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013615 4813 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013622 4813 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013630 4813 flags.go:64] FLAG: --healthz-port="10248"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013637 4813 flags.go:64] FLAG: --help="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013644 4813 flags.go:64] FLAG: --hostname-override=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013651 4813 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013658 4813 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013666 4813 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013672 4813 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013679 4813 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013686 4813 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013694 4813 flags.go:64] FLAG: --image-service-endpoint=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013701 4813 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013708 4813 flags.go:64] FLAG: --kube-api-burst="100"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013716 4813 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013725 4813 flags.go:64] FLAG: --kube-api-qps="50"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013732 4813 flags.go:64] FLAG: --kube-reserved=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013739 4813 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013746 4813 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013753 4813 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013761 4813 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013768 4813 flags.go:64] FLAG: --lock-file=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013775 4813 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013782 4813 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013789 4813 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013800 4813 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013819 4813 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013826 4813 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013833 4813 flags.go:64] FLAG: --logging-format="text"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013840 4813 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013848 4813 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013855 4813 flags.go:64] FLAG: --manifest-url=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013863 4813 flags.go:64] FLAG: --manifest-url-header=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013872 4813 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013880 4813 flags.go:64] FLAG: --max-open-files="1000000"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013889 4813 flags.go:64] FLAG: --max-pods="110"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013896 4813 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013903 4813 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013910 4813 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013918 4813 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013925 4813 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013932 4813 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013940 4813 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013959 4813 flags.go:64] FLAG: --node-status-max-images="50"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013966 4813 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013974 4813 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013982 4813 flags.go:64] FLAG: --pod-cidr=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.013990 4813 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014003 4813 flags.go:64] FLAG: --pod-manifest-path=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014011 4813 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014019 4813 flags.go:64] FLAG: --pods-per-core="0"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014027 4813 flags.go:64] FLAG: --port="10250"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014036 4813 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014044 4813 flags.go:64] FLAG: --provider-id=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014052 4813 flags.go:64] FLAG: --qos-reserved=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014059 4813 flags.go:64] FLAG: --read-only-port="10255"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014066 4813 flags.go:64] FLAG: --register-node="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014074 4813 flags.go:64] FLAG: --register-schedulable="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014082 4813 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014096 4813 flags.go:64] FLAG: --registry-burst="10"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014103 4813 flags.go:64] FLAG: --registry-qps="5"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014133 4813 flags.go:64] FLAG: --reserved-cpus=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014152 4813 flags.go:64] FLAG: --reserved-memory=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014162 4813 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014169 4813 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014177 4813 flags.go:64] FLAG: --rotate-certificates="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014184 4813 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014191 4813 flags.go:64] FLAG: --runonce="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014198 4813 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014206 4813 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014214 4813 flags.go:64] FLAG: --seccomp-default="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014221 4813 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014227 4813 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014235 4813 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014242 4813 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014251 4813 flags.go:64] FLAG: --storage-driver-password="root"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014257 4813 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014264 4813 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014273 4813 flags.go:64] FLAG: --storage-driver-user="root"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014280 4813 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014289 4813 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014297 4813 flags.go:64] FLAG: --system-cgroups=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014304 4813 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014317 4813 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014324 4813 flags.go:64] FLAG: --tls-cert-file=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014331 4813 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014346 4813 flags.go:64] FLAG: --tls-min-version=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014352 4813 flags.go:64] FLAG: --tls-private-key-file=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014359 4813 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014366 4813 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014374 4813 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014381 4813 flags.go:64] FLAG: --v="2"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014391 4813 flags.go:64] FLAG: --version="false"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014400 4813 flags.go:64] FLAG: --vmodule=""
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014409 4813 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.014416 4813 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014626 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014636 4813 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014653 4813 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014661 4813 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014667 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014675 4813 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014682 4813 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014688 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014695 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014702 4813 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014711 4813 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014720 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014727 4813 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014734 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014744 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014752 4813 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014767 4813 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014773 4813 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014782 4813 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
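[editor's note] The flags.go:64 dump above records every effective command-line value for this kubelet. A minimal sketch (not part of the captured log) of rebuilding those values into a dict for diffing against a desired configuration; the local filename kubelet.log is again an assumption:

    import re

    # Map each "FLAG: --name=\"value\"" record to its value string.
    flags = {}
    with open("kubelet.log", encoding="utf-8", errors="replace") as f:  # hypothetical path
        for line in f:
            for m in re.finditer(r'flags\.go:64\] FLAG: (--[\w-]+)="([^"]*)"', line):
                flags[m.group(1)] = m.group(2)

    print(flags.get("--node-ip"))          # 192.168.126.11
    print(flags.get("--system-reserved"))  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi

Values are kept as raw strings because the dump quotes everything uniformly, including lists like "[pods]".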
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014790 4813 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014798 4813 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014805 4813 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014812 4813 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014818 4813 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014824 4813 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014831 4813 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014837 4813 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014843 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014849 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014855 4813 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014861 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014868 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014874 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014881 4813 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014887 4813 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014893 4813 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014899 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014905 4813 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014922 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014928 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014953 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014961 4813 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014969 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014976 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014982 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014989 4813 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.014996 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015003 4813 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015012 4813 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015018 4813 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015025 4813 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015031 4813 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015037 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015043 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015049 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015055 4813 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015061 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015067 4813 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015075 4813 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015082 4813 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015089 4813 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015095 4813 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015102 4813 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015134 4813 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015141 4813 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015147 4813 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015154 4813 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015161 4813 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015170 4813 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015177 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.015184 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.015194 4813 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.028286 4813 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.028351 4813 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028501 4813 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028533 4813 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028547 4813 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028558 4813 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028569 4813 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028579 4813 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028589 4813 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028599 4813 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028608 4813 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028617 4813 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028626 4813 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028823 4813 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028831 4813 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028841 4813 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028849 4813 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028858 4813 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028867 4813 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.028875 4813 feature_gate.go:330] unrecognized feature 
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.030951 4813 server.go:940] "Client rotation is on, will bootstrap in background"
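The records that follow show the client certificate manager picking a rotation deadline of 2025-12-25 for a certificate that does not expire until 2026-02-24. That gap is characteristic of client-go's certificate manager, which rotates at a jittered point late in the certificate's validity window so that a fleet of kubelets does not hit the CSR API at the same instant. A sketch of that heuristic under stated assumptions; the notBefore value is assumed (only the expiry appears in the log), and the real logic lives in k8s.io/client-go/util/certificate:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// rotationDeadline sketches the client-go heuristic: rotate at a random
// point in roughly the last 10-30% of the certificate's lifetime.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    total := notAfter.Sub(notBefore)
    jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    return notBefore.Add(jittered)
}

func main() {
    // Assumed issue time; a one-year certificate is consistent with the
    // expiry and deadline seen in this log.
    notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
    notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
    fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
}

With those assumed bounds, 70-90% of the lifetime lands between early November 2025 and mid-January 2026, which brackets the logged deadline.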
bootstrap necessary" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.036762 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.039255 4813 server.go:997] "Starting client certificate rotation" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.039293 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.040983 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-25 05:41:40.861225011 +0000 UTC Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.041062 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.067182 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.069569 4813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.070819 4813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.086843 4813 log.go:25] "Validated CRI v1 runtime API" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.117942 4813 log.go:25] "Validated CRI v1 image API" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.120691 4813 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.125733 4813 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-16-24-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.125769 4813 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.143901 4813 manager.go:217] Machine: {Timestamp:2026-01-29 16:29:08.140089096 +0000 UTC m=+0.627292332 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b2b97708-a7a8-4117-92e9-16865fc3d92d 
BootID:d38b52e6-6d60-4e06-b22e-49bd8ff8645c Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c8:be:b6 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c8:be:b6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b1:f9:19 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:24:01:d8 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:23:90:27 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6c:81:6a Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b0:f3:41 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b6:a7:e1:30:30:ab Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ee:0d:4d:94:c7:09 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: 
DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.144194 4813 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.144367 4813 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.144774 4813 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.144977 4813 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.145023 4813 container_manager_linux.go:272] "Creating Container Manager object based on Node Config"
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.145421 4813 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.145434 4813 container_manager_linux.go:303] "Creating device plugin manager" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.146256 4813 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.146294 4813 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.147010 4813 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.147125 4813 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.153709 4813 kubelet.go:418] "Attempting to sync node with API server" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.153745 4813 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.153779 4813 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.153799 4813 kubelet.go:324] "Adding apiserver pod source" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.153816 4813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.159421 4813 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.161133 4813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.161362 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.161431 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.162749 4813 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.163555 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.163620 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164544 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164581 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164591 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164604 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164619 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164629 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164638 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164655 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164664 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164674 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164705 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.164715 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.168180 4813 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.168912 4813 server.go:1280] "Started kubelet"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.172851 4813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.173055 4813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:29:08 crc systemd[1]: Started Kubernetes Kubelet.
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.175720 4813 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.177094 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.178714 4813 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.187499 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.187538 4813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.188062 4813 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.188079 4813 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.188193 4813 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.188045 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 06:28:38.280352294 +0000 UTC
Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.188398 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.189236 4813 factory.go:55] Registering systemd factory
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.189268 4813 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.189699 4813 factory.go:153] Registering CRI-O factory
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.189735 4813 factory.go:221] Registration of the crio container factory successfully
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.189820 4813 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.189851 4813 factory.go:103] Registering Raw factory
Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.189875 4813 manager.go:1196] Started watching for new ooms in manager
Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.190335 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Jan 29 16:29:08 crc kubenswrapper[4813]: E0129
16:29:08.188905 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f408be523ca9b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:29:08.168854171 +0000 UTC m=+0.656057397,LastTimestamp:2026-01-29 16:29:08.168854171 +0000 UTC m=+0.656057397,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.190671 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="200ms" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.191093 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.192462 4813 manager.go:319] Starting recovery of all containers Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.198605 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.198916 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.198939 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.198956 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199010 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199033 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199052 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199071 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199091 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199142 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199166 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199184 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199201 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199223 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199242 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199259 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199279 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199296 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199313 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199339 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199362 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199384 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199408 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199424 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199442 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199461 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199517 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199543 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199561 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199592 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199610 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199635 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199662 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199683 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199703 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199722 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199739 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199758 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199775 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199792 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199807 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199824 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199889 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199909 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199925 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199942 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199959 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199974 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.199993 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200013 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200031 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200051 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200076 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200096 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200136 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200181 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200201 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200219 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200237 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200254 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200272 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200297 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200315 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200339 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200364 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200382 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200398 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200414 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200430 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200447 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200464 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200482 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200500 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200515 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200533 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200550 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200566 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200584 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200601 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200618 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200644 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200662 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200678 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200695 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200713 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200737 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200755 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200773 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200791 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.200811 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204242 4813 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204293 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204325 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204345 4813 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204363 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204380 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204413 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204431 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204610 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204633 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204664 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204683 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204701 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204720 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204740 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204779 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204800 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204819 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204838 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204857 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204876 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204904 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204930 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204948 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204972 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.204989 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205033 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205060 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205076 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205100 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205152 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205170 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205187 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205205 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205224 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205241 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205260 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205287 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205419 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205441 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205466 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205589 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205626 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205651 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205676 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205702 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205727 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205772 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205791 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205823 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205847 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205883 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205903 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205922 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205943 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205959 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205976 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.205993 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206012 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206028 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206045 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206063 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206093 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206130 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206152 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206176 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206194 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206219 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206247 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206265 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206284 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206303 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206324 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206351 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206371 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206387 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206405 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206422 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206449 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206467 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206652 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206849 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206907 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206957 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.206982 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207000 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207070 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207089 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207194 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207280 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207302 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207327 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207349 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207367 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207393 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207415 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207439 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207471 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207495 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207524 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207561 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207585 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207608 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207636 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207671 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207694 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207713 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.207730 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209596 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209627 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209666 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209684 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209709 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209725 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209739 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209763 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209777 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209799 4813 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209810 4813 reconstruct.go:97] "Volume reconstruction finished" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.209819 4813 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.219738 4813 manager.go:324] Recovery completed Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.232231 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.233827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.233887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.233904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.234784 4813 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.235084 4813 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.235122 4813 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.235328 4813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.238345 4813 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.238381 4813 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.238407 4813 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.238448 4813 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.239069 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.239156 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.249044 4813 policy_none.go:49] "None policy: Start" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.249769 4813 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.249790 4813 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.294927 4813 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.330411 4813 manager.go:334] "Starting Device Plugin manager" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.330482 4813 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.330502 4813 server.go:79] "Starting device plugin registration server" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.330944 4813 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.331002 4813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.331323 4813 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.331442 4813 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.331459 4813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.339550 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.339661 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.340490 4813 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.340724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.340754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.340765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.340874 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.341240 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.341291 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.341519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.341552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.341562 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.341777 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.341888 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.341946 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342705 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342734 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.342644 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.343471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.343504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.343541 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.343847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.343865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.343874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344062 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344193 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344233 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344853 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344893 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.344941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.345069 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.345094 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.346024 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.346053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.346062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.392257 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="400ms" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.411885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412018 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412045 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412064 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412241 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412259 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412438 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412575 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412767 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.412972 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.413085 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.413185 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.431958 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.433403 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.433457 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.433471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.433503 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.434059 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514463 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514537 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514595 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514611 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514625 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514657 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514673 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514688 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514745 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514704 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 
16:29:08.514772 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514782 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514791 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514797 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514837 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514674 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514727 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514710 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 
16:29:08.514895 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514888 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.514815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.635000 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.636447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.636510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.636524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.636565 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.637099 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.685382 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.694455 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.713914 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.719805 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: I0129 16:29:08.723549 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.748974 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-5976586d9d1be63d73c6db7aff3a684f5da022632d5f31740ea28487ed4a434f WatchSource:0}: Error finding container 5976586d9d1be63d73c6db7aff3a684f5da022632d5f31740ea28487ed4a434f: Status 404 returned error can't find the container with id 5976586d9d1be63d73c6db7aff3a684f5da022632d5f31740ea28487ed4a434f Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.751251 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c64276eacfdc2b11058f031b2fbe681cc15d3167557cf4bc1099e28887e41912 WatchSource:0}: Error finding container c64276eacfdc2b11058f031b2fbe681cc15d3167557cf4bc1099e28887e41912: Status 404 returned error can't find the container with id c64276eacfdc2b11058f031b2fbe681cc15d3167557cf4bc1099e28887e41912 Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.757773 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-edbdd73cf22c91427e1f4a71659c57f5403df294789df3ab9573251089cf2dd5 WatchSource:0}: Error finding container edbdd73cf22c91427e1f4a71659c57f5403df294789df3ab9573251089cf2dd5: Status 404 returned error can't find the container with id edbdd73cf22c91427e1f4a71659c57f5403df294789df3ab9573251089cf2dd5 Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.760855 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-02d1e221cae088708bbaeaba2695b838f3c537a43fd399f81340eb24903c8c06 WatchSource:0}: Error finding container 02d1e221cae088708bbaeaba2695b838f3c537a43fd399f81340eb24903c8c06: Status 404 returned error can't find the container with id 02d1e221cae088708bbaeaba2695b838f3c537a43fd399f81340eb24903c8c06 Jan 29 16:29:08 crc kubenswrapper[4813]: W0129 16:29:08.761934 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8e45645b5a6130cf62e362df84d303a8fb59980df3fe1f8e8fc75bd47fba6889 WatchSource:0}: Error finding container 8e45645b5a6130cf62e362df84d303a8fb59980df3fe1f8e8fc75bd47fba6889: Status 404 returned error can't find the container with id 8e45645b5a6130cf62e362df84d303a8fb59980df3fe1f8e8fc75bd47fba6889 Jan 29 16:29:08 crc kubenswrapper[4813]: E0129 16:29:08.793255 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="800ms" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.037860 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.039752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.039795 4813 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.039808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.039838 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:29:09 crc kubenswrapper[4813]: E0129 16:29:09.040460 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Jan 29 16:29:09 crc kubenswrapper[4813]: W0129 16:29:09.130557 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Jan 29 16:29:09 crc kubenswrapper[4813]: E0129 16:29:09.130656 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.178555 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.188555 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:48:17.419074413 +0000 UTC Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.244897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8e45645b5a6130cf62e362df84d303a8fb59980df3fe1f8e8fc75bd47fba6889"} Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.246080 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"02d1e221cae088708bbaeaba2695b838f3c537a43fd399f81340eb24903c8c06"} Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.247995 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"edbdd73cf22c91427e1f4a71659c57f5403df294789df3ab9573251089cf2dd5"} Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.249335 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c64276eacfdc2b11058f031b2fbe681cc15d3167557cf4bc1099e28887e41912"} Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.251277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5976586d9d1be63d73c6db7aff3a684f5da022632d5f31740ea28487ed4a434f"} Jan 29 16:29:09 crc 
kubenswrapper[4813]: W0129 16:29:09.378557 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Jan 29 16:29:09 crc kubenswrapper[4813]: E0129 16:29:09.378753 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:09 crc kubenswrapper[4813]: W0129 16:29:09.447002 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Jan 29 16:29:09 crc kubenswrapper[4813]: E0129 16:29:09.447137 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:09 crc kubenswrapper[4813]: W0129 16:29:09.470212 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Jan 29 16:29:09 crc kubenswrapper[4813]: E0129 16:29:09.470289 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:09 crc kubenswrapper[4813]: E0129 16:29:09.594816 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="1.6s" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.840687 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.842872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.842909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.842918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:09 crc kubenswrapper[4813]: I0129 16:29:09.842964 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:29:09 crc kubenswrapper[4813]: E0129 16:29:09.843388 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" 
node="crc" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.179305 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.189511 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:12:24.916232248 +0000 UTC Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.191945 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 16:29:10 crc kubenswrapper[4813]: E0129 16:29:10.194183 4813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.259260 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62" exitCode=0 Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.259444 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.259385 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62"} Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.260901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.260942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.260953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.261937 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920" exitCode=0 Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.262074 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920"} Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.262184 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.262740 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.263808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.263844 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.263852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.263962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.264011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.264033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.265428 4813 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1dbe29c28c16c3039ab1cee8e50cf14ad33e315a5040536d7a745fc4578f5ae7" exitCode=0 Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.265516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1dbe29c28c16c3039ab1cee8e50cf14ad33e315a5040536d7a745fc4578f5ae7"} Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.265585 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.266606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.266634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.266650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.269689 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f"} Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.269745 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f"} Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.269762 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c"} Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.269709 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.269778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c"} 
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.272472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.272529 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.272544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.275639 4813 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554" exitCode=0
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.275693 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554"}
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.275727 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.276613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.276657 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:10 crc kubenswrapper[4813]: I0129 16:29:10.276683 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:11 crc kubenswrapper[4813]: W0129 16:29:11.001192 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Jan 29 16:29:11 crc kubenswrapper[4813]: E0129 16:29:11.001333 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.178960 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.189941 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:52:03.28483632 +0000 UTC
Jan 29 16:29:11 crc kubenswrapper[4813]: W0129 16:29:11.194893 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Jan 29 16:29:11 crc kubenswrapper[4813]: E0129 16:29:11.194983 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:29:11 crc kubenswrapper[4813]: E0129 16:29:11.195819 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="3.2s"
Jan 29 16:29:11 crc kubenswrapper[4813]: W0129 16:29:11.272935 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused
Jan 29 16:29:11 crc kubenswrapper[4813]: E0129 16:29:11.273055 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.281241 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"122252dcc5afcc1ffc531de572de4e330e552a9d6a9d962d59805299bf36aed9"}
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.281292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a"}
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.281305 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca"}
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.281319 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a"}
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.281330 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03"}
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.281450 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.282413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.282441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.282451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
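The "Failed to ensure lease exists, will retry" records above trace a doubling retry interval while the API server refuses connections: 400ms, 800ms, 1.6s, then 3.2s. A minimal Go sketch of that capped-doubling pattern; only the 400ms start and the doubling are read off this log, while the 7s ceiling is an assumption (the real lease controller also stops doubling at some cap):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Starting interval read off the first retry record in this log.
	interval := 400 * time.Millisecond
	// Assumed ceiling for the sketch; not quoted from kubelet source.
	const maxBackoff = 7 * time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed, will retry in %s\n", attempt, interval)
		interval *= 2 // double on every consecutive failure
		if interval > maxBackoff {
			interval = maxBackoff
		}
	}
}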
event="NodeHasSufficientPID" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.284945 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8" exitCode=0 Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.285008 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8"} Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.285139 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.289134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.289202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.289226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.293433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aa3c5ac0faa8c01066dfa9472e84f08191703ef0ce7c747bf46f870b3bbd53c7"} Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.293483 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.294756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.294794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.294808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.298449 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.298778 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.299055 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7"} Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.299103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1"} Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.299145 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f"} Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.299539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.299572 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.299584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.300342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.300366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.300377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:11 crc kubenswrapper[4813]: W0129 16:29:11.318900 4813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.47:6443: connect: connection refused Jan 29 16:29:11 crc kubenswrapper[4813]: E0129 16:29:11.319017 4813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.47:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.444420 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.445835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.445864 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.445874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.445897 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:29:11 crc kubenswrapper[4813]: E0129 16:29:11.446414 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.47:6443: connect: connection refused" node="crc" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.581898 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:11 crc kubenswrapper[4813]: I0129 16:29:11.898560 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.190332 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2026-01-15 20:31:46.416740097 +0000 UTC Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.303414 4813 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e" exitCode=0 Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.303520 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.303531 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.303547 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.303653 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.303700 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e"} Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.303806 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.303960 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.304833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.304872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.304886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.304965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.304994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305461 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.305789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:12 crc kubenswrapper[4813]: I0129 16:29:12.866239 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.191310 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:38:18.725453824 +0000 UTC Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.318817 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c"} Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.318899 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1"} Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.318926 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88"} Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.318934 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.318946 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b"} Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.318968 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6"} Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.319207 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.319684 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.320036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.320065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.320079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.320761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.320810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.320828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.321800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.321843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:13 crc kubenswrapper[4813]: I0129 16:29:13.321858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.192334 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:10:19.471952976 +0000 UTC
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.272996 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.321797 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.321803 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.323534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.323590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.323606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.323878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.324260 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.324406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.531565 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.544005 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.623838 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.624012 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.625424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.625468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.625479 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.647077 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.648593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.648651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.648665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.648694 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.749389 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.756580 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.772153 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.899380 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 16:29:14 crc kubenswrapper[4813]: I0129 16:29:14.899472 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.192783 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:55:08.749247287 +0000 UTC
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.324443 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.324454 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.324608 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.325334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.325361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.325372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.325981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.326001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.326065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.326040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.326078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.326096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.825073 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.825351 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.826815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.826874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:15 crc kubenswrapper[4813]: I0129 16:29:15.826887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:16 crc kubenswrapper[4813]: I0129 16:29:16.193623 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:41:18.678566865 +0000 UTC
Jan 29 16:29:16 crc kubenswrapper[4813]: I0129 16:29:16.327069 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:16 crc kubenswrapper[4813]: I0129 16:29:16.328281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:16 crc kubenswrapper[4813]: I0129 16:29:16.328338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:16 crc kubenswrapper[4813]: I0129 16:29:16.328351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:17 crc kubenswrapper[4813]: I0129 16:29:17.193961 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:09:18.554434594 +0000 UTC
Jan 29 16:29:18 crc kubenswrapper[4813]: I0129 16:29:18.195012 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:04:46.145362484 +0000 UTC
Jan 29 16:29:18 crc kubenswrapper[4813]: E0129 16:29:18.340700 4813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 16:29:19 crc kubenswrapper[4813]: I0129 16:29:19.195862 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:56:30.296298505 +0000 UTC
Jan 29 16:29:19 crc kubenswrapper[4813]: I0129 16:29:19.617336 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 29 16:29:19 crc kubenswrapper[4813]: I0129 16:29:19.617612 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:19 crc kubenswrapper[4813]: I0129 16:29:19.619386 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:19 crc kubenswrapper[4813]: I0129 16:29:19.619456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:19 crc kubenswrapper[4813]: I0129 16:29:19.619475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:20 crc kubenswrapper[4813]: I0129 16:29:20.196395 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:48:22.182329578 +0000 UTC
Jan 29 16:29:21 crc kubenswrapper[4813]: I0129 16:29:21.197095 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:00:44.117763308 +0000 UTC
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.175618 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45052->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.175711 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45052->192.168.126.11:17697: read: connection reset by peer"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.179008 4813 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.197635 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:33:35.764561949 +0000 UTC
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.329050 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.329168 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.333282 4813 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.333382 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.346941 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.349622 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="122252dcc5afcc1ffc531de572de4e330e552a9d6a9d962d59805299bf36aed9" exitCode=255
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.349691 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"122252dcc5afcc1ffc531de572de4e330e552a9d6a9d962d59805299bf36aed9"}
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.349897 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.351010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.351046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.351055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.351585 4813 scope.go:117] "RemoveContainer" containerID="122252dcc5afcc1ffc531de572de4e330e552a9d6a9d962d59805299bf36aed9"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.872206 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.872367 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.873940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.874004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:22 crc kubenswrapper[4813]: I0129 16:29:22.874025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:23 crc kubenswrapper[4813]: I0129 16:29:23.198042 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:26:37.616332613 +0000 UTC
Jan 29 16:29:23 crc kubenswrapper[4813]: I0129 16:29:23.354456 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 16:29:23 crc kubenswrapper[4813]: I0129 16:29:23.356453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9"}
Jan 29 16:29:23 crc kubenswrapper[4813]: I0129 16:29:23.356640 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:23 crc kubenswrapper[4813]: I0129 16:29:23.357553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:23 crc kubenswrapper[4813]: I0129 16:29:23.357587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:23 crc kubenswrapper[4813]: I0129 16:29:23.357598 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.198974 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:23:35.852200252 +0000 UTC
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.778072 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.778279 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.778379 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.779838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.779892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.779904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.782171 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.899822 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 16:29:24 crc kubenswrapper[4813]: I0129 16:29:24.900252 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 16:29:25 crc kubenswrapper[4813]: I0129 16:29:25.199840 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:48:42.489284553 +0000 UTC
Jan 29 16:29:25 crc kubenswrapper[4813]: I0129 16:29:25.363864 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:25 crc kubenswrapper[4813]: I0129 16:29:25.364998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:25 crc kubenswrapper[4813]: I0129 16:29:25.365027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:25 crc kubenswrapper[4813]: I0129 16:29:25.365036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:26 crc kubenswrapper[4813]: I0129 16:29:26.200964 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:15:58.828896642 +0000 UTC
Jan 29 16:29:26 crc kubenswrapper[4813]: I0129 16:29:26.367225 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 16:29:26 crc kubenswrapper[4813]: I0129 16:29:26.369085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:26 crc kubenswrapper[4813]: I0129 16:29:26.369155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:26 crc kubenswrapper[4813]: I0129 16:29:26.369177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.201959 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:43:27.85930323 +0000 UTC
Jan 29 16:29:27 crc kubenswrapper[4813]: E0129 16:29:27.312985 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.318897 4813 trace.go:236] Trace[1793214783]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:29:16.506) (total time: 10812ms):
Jan 29 16:29:27 crc kubenswrapper[4813]: Trace[1793214783]: ---"Objects listed" error: 10812ms (16:29:27.318)
Jan 29 16:29:27 crc kubenswrapper[4813]: Trace[1793214783]: [10.812214373s] [10.812214373s] END
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.318956 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 16:29:27 crc kubenswrapper[4813]: E0129 16:29:27.319926 4813 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.322235 4813 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.322324 4813 trace.go:236] Trace[2034821991]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:29:17.078) (total time: 10244ms):
Jan 29 16:29:27 crc kubenswrapper[4813]: Trace[2034821991]: ---"Objects listed" error: 10244ms (16:29:27.322)
Jan 29 16:29:27 crc kubenswrapper[4813]: Trace[2034821991]: [10.244164072s] [10.244164072s] END
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.322360 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.323206 4813 trace.go:236] Trace[1150491761]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 16:29:15.953) (total time: 11369ms):
Jan 29 16:29:27 crc kubenswrapper[4813]: Trace[1150491761]: ---"Objects listed" error: 11369ms (16:29:27.323)
Jan 29 16:29:27 crc kubenswrapper[4813]: Trace[1150491761]: [11.369390679s] [11.369390679s] END
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.323236 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.326054 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 16:29:27 crc kubenswrapper[4813]: I0129 16:29:27.330353 4813 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.167375 4813 apiserver.go:52] "Watching apiserver"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.171139 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.171365 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.171819 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.172176 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.172228 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.172448 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.172495 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.172509 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.172682 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.172738 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.172802 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.173954 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.173985 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.176676 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.177049 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.177544 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.177787 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.178241 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.178836 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.179084 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.189383 4813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.201073 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.202973 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:58:59.409730428 +0000 UTC
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.211544 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.222181 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228200 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228389 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228608 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228639 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228663 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228689 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228719 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.229042 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.229056 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.229299 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.229566 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.229848 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.229945 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:29:28.729618643 +0000 UTC m=+21.216821849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.228776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230145 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230386 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230419 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230466 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230513 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230565 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230589 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230613 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230633 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230215 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230757 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230737 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230789 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230906 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230936 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230959 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231009 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.230983 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231056 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231083 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231083 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231109 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231152 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231173 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231183 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231217 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231238 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231264 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231285 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231304 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231321 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231338 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231351 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231326 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231361 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231393 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231441 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231463 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231486 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231537 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231564 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231586 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231609 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231670 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231695 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231716 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231738 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231767 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231793 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231868 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231893 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231916 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231940 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231992 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232014 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232034 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232089 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232140 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232167 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232190 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232212 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232230 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232271 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232292 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232318 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232336 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232356 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232376 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232398 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232446 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232469 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232492 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232515 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232537 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232557 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232580 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232602 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232632 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232653 
4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232674 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232718 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232739 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232765 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232787 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232809 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232830 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232851 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232871 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232892 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232933 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232954 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232973 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232994 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233012 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233034 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233057 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233078 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233098 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233136 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233159 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233201 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231488 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231589 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231718 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.231741 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233256 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.232375 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233295 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233327 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233354 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233378 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233401 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233428 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233450 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233470 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233495 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233517 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233543 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233584 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233607 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233626 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233671 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233736 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233758 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233780 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233803 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233827 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233850 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233875 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233923 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233946 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233966 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234018 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234042 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234062 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234082 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 
16:29:28.234103 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234142 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234167 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234192 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234213 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234234 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234259 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234286 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234325 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234352 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234375 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234399 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234447 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234472 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234492 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234553 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 16:29:28 crc kubenswrapper[4813]: 
I0129 16:29:28.234601 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234626 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234653 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234679 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234703 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234724 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234809 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234833 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234856 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234880 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234903 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234924 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234945 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234969 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235012 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235060 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235087 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235113 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235169 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235197 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235223 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235248 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235290 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235314 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235339 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235359 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235384 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235435 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235465 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235496 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235738 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235767 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235886 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235909 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235949 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236032 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236044 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236055 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236065 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236075 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236086 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236099 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236117 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236146 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236159 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236173 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236187 4813 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236200 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236216 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236229 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236246 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236260 4813 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236274 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236287 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236300 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236312 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236325 4813 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236338 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236351 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236362 4813 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236374 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236386 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.237300 4813 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.246362 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.258785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233553 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233759 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233779 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233674 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.233841 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234074 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234150 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234269 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234289 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234301 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234532 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.234615 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235221 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235300 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235510 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.235834 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236390 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236413 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236586 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236618 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236767 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236901 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.236885 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.237087 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.237193 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.237284 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.237315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.237556 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.237655 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.237939 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238182 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238492 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238495 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238505 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238449 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238796 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.239000 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.239038 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.238980 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.240883 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.241276 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.241678 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.242091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.242296 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.242345 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.242367 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.242872 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.242914 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.243172 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.243276 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.243337 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.244469 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.244916 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.246197 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.247772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.247661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.248380 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.248430 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.248480 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.248900 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.248965 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.249690 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.249783 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.250221 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.249703 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.257744 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.261054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.261303 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:28.7612749 +0000 UTC m=+21.248478116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.261733 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:28.761699143 +0000 UTC m=+21.248902369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.262318 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.262374 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.262394 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.260067 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.263006 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:28.76297435 +0000 UTC m=+21.250177566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.264877 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.270034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.271346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.271577 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.271531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.271829 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.272231 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.272513 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.272619 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.272856 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.273086 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.273109 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.272969 4813 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.273208 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:28.773176945 +0000 UTC m=+21.260380181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.273471 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.273709 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.274162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.274428 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.275857 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.275951 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.276297 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.275829 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.276949 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.276949 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.277232 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.277392 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.277457 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.278182 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.278179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.278229 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.278329 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.278588 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.278275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.279265 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.279423 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.279621 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.279801 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.283698 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.284061 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.284369 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.284540 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.284745 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.285001 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.285301 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.285488 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.286086 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.286507 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.286749 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.286989 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.287686 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.287734 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.287952 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288236 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288306 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288318 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288424 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.287756 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288600 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288731 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288748 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288755 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288810 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288914 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.288948 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.289049 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.289261 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.289650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.289776 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.289866 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.290536 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.290580 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.290653 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.290664 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.290884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.290960 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.291203 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.290814 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.291351 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.291512 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.291677 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.292032 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.292545 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.292839 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.292971 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.293240 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.293865 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.293966 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294035 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294137 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294176 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.292152 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294230 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294239 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294286 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294377 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294581 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294806 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.294817 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.295298 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.295357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.295409 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.295421 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.295666 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.295897 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.296785 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.296677 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.297736 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.302094 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.302157 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.303333 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.304031 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.304725 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.305692 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.306512 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.309484 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.309621 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.311388 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.315400 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.317223 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.318921 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.325401 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.325653 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.327597 4813 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.327791 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.328688 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.331906 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.334934 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.335537 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.336807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.336879 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.336961 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.336978 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.336992 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337004 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337015 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337026 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337039 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337051 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337062 4813 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337075 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337085 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337096 4813 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337113 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337139 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337151 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337161 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337173 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337186 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337197 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337210 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337222 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337234 4813 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337247 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337260 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337275 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337284 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337293 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337303 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337312 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337322 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337331 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337339 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337349 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337361 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337373 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337385 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337329 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337399 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337378 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337549 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337567 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337580 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337591 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337601 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337611 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337620 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337630 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337639 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337668 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337683 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337695 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337706 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337764 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337798 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337807 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337815 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337825 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337851 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337862 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337873 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337884 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337896 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337908 4813 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337942 4813 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.337990 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338005 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338328 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338335 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338347 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338362 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338376 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338390 4813 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338403 4813 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338416 4813 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338429 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338443 4813 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338456 4813 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338488 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338527 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338872 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338888 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338900 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338937 4813 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338957 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.338971 4813 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339032 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339047 4813 reconciler_common.go:293] "Volume detached for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339063 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339076 4813 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339090 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339102 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339603 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339618 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339630 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339643 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339710 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339723 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339735 4813 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339748 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339798 4813 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339812 4813 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339825 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339838 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339850 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339886 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339901 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339934 4813 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339949 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339961 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339962 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.339974 4813 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340022 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340059 4813 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340073 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340086 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340101 4813 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340118 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340156 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340167 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340179 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340190 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340206 4813 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340220 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" 
DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340231 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340242 4813 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340255 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340268 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340279 4813 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340291 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340302 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340313 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340325 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340337 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340349 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340362 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340375 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 
16:29:28.340387 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340399 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340412 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340424 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340436 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340448 4813 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340461 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340475 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340487 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340498 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340510 4813 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340522 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340534 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc 
kubenswrapper[4813]: I0129 16:29:28.340546 4813 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340557 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340567 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340578 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340589 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340602 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340613 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340626 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340638 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340650 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340662 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340675 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340688 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340715 4813 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340728 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340741 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340752 4813 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340764 4813 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340777 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340791 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340803 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.340859 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.341835 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.343391 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.344183 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.345503 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.346501 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.347102 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 16:29:28 crc 
kubenswrapper[4813]: I0129 16:29:28.348116 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.348884 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.349883 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.350422 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.351348 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.351825 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.353033 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.353607 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.354587 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.354931 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.355084 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.356227 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.356925 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.357660 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.365795 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.379832 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.380154 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.381861 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.385149 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9" exitCode=255 Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.385194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9"} Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.385248 4813 scope.go:117] "RemoveContainer" containerID="122252dcc5afcc1ffc531de572de4e330e552a9d6a9d962d59805299bf36aed9" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.393402 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.411334 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.411762 4813 scope.go:117] "RemoveContainer" containerID="df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.411884 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.412038 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.425904 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.441754 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.462035 4813 csr.go:261] certificate signing request csr-4sl25 is approved, waiting to be issued Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.462008 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.477269 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.487223 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.494538 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.501290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.505490 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.520818 4813 csr.go:257] certificate signing request csr-4sl25 is issued Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.542161 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.743816 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.744032 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:29:29.743999223 +0000 UTC m=+22.231202459 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.844835 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.844915 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.844941 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:28 crc kubenswrapper[4813]: I0129 16:29:28.844988 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845064 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845102 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845166 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845179 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845230 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2026-01-29 16:29:29.845213787 +0000 UTC m=+22.332417003 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845266 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:29.845248258 +0000 UTC m=+22.332451494 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845363 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845417 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:29.845407083 +0000 UTC m=+22.332610369 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845526 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845568 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845584 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:28 crc kubenswrapper[4813]: E0129 16:29:28.845618 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:29.845608418 +0000 UTC m=+22.332811704 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.203150 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:15:31.049699181 +0000 UTC Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.387850 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-srj6p"] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.388356 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-srj6p" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.389432 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fa5fd6410171a9e39022cf3fe5564efc76393626b7b2673f0daf7b28704f3cfa"} Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.389662 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-r269r"] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.390113 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.391960 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.392337 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.393250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6"} Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.393305 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d"} Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.393320 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fa8e8b40c53927a5a28df97410d9fb6497a4f37342ff79e292839ba2ba6248ba"} Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.393462 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.393797 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.394031 4813 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.394068 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.394790 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.395349 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.398243 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e"} Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.398288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5ef8a17664a8138d7637157241dcb6639bef074e8e0c912dde83c62f7d7fe247"} Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.400229 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.402450 4813 scope.go:117] "RemoveContainer" containerID="df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9" Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.402594 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.418486 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.432886 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.443658 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.451253 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71cdf350-59d3-4d6f-8995-173528429b59-proxy-tls\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.451468 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71cdf350-59d3-4d6f-8995-173528429b59-mcd-auth-proxy-config\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.451549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhjg\" (UniqueName: \"kubernetes.io/projected/b9b262d6-5205-4aaf-85df-6ac3c03c5d93-kube-api-access-thhjg\") pod \"node-resolver-srj6p\" (UID: \"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\") " pod="openshift-dns/node-resolver-srj6p" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.451639 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b9b262d6-5205-4aaf-85df-6ac3c03c5d93-hosts-file\") pod \"node-resolver-srj6p\" (UID: \"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\") " pod="openshift-dns/node-resolver-srj6p" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.451737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71cdf350-59d3-4d6f-8995-173528429b59-rootfs\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.451838 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pfvf\" (UniqueName: \"kubernetes.io/projected/71cdf350-59d3-4d6f-8995-173528429b59-kube-api-access-5pfvf\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.458538 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.473913 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://122252dcc5afcc1ffc531de572de4e330e552a9d6a9d962d59805299bf36aed9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:22Z\\\",\\\"message\\\":\\\"W0129 16:29:11.398096 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 16:29:11.398533 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769704151 cert, and key in /tmp/serving-cert-3973945379/serving-signer.crt, /tmp/serving-cert-3973945379/serving-signer.key\\\\nI0129 16:29:11.588690 1 observer_polling.go:159] Starting file observer\\\\nW0129 16:29:11.592994 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 16:29:11.593208 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:11.595420 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3973945379/tls.crt::/tmp/serving-cert-3973945379/tls.key\\\\\\\"\\\\nF0129 16:29:22.161423 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.487626 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.507094 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.521175 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.522156 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 16:24:28 +0000 UTC, rotation deadline is 2026-11-06 12:55:18.594259837 +0000 UTC Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.522251 4813 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6740h25m49.072013815s for next certificate rotation Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.535686 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.549344 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.552625 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71cdf350-59d3-4d6f-8995-173528429b59-proxy-tls\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.552660 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thhjg\" (UniqueName: \"kubernetes.io/projected/b9b262d6-5205-4aaf-85df-6ac3c03c5d93-kube-api-access-thhjg\") pod \"node-resolver-srj6p\" (UID: \"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\") " pod="openshift-dns/node-resolver-srj6p" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.552696 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71cdf350-59d3-4d6f-8995-173528429b59-mcd-auth-proxy-config\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.552718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b9b262d6-5205-4aaf-85df-6ac3c03c5d93-hosts-file\") pod \"node-resolver-srj6p\" (UID: \"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\") " pod="openshift-dns/node-resolver-srj6p" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.552736 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pfvf\" (UniqueName: \"kubernetes.io/projected/71cdf350-59d3-4d6f-8995-173528429b59-kube-api-access-5pfvf\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.552759 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71cdf350-59d3-4d6f-8995-173528429b59-rootfs\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 
16:29:29.552824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/71cdf350-59d3-4d6f-8995-173528429b59-rootfs\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.552900 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b9b262d6-5205-4aaf-85df-6ac3c03c5d93-hosts-file\") pod \"node-resolver-srj6p\" (UID: \"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\") " pod="openshift-dns/node-resolver-srj6p" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.553625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71cdf350-59d3-4d6f-8995-173528429b59-mcd-auth-proxy-config\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.561080 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71cdf350-59d3-4d6f-8995-173528429b59-proxy-tls\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.563275 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.569632 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thhjg\" (UniqueName: \"kubernetes.io/projected/b9b262d6-5205-4aaf-85df-6ac3c03c5d93-kube-api-access-thhjg\") pod \"node-resolver-srj6p\" (UID: \"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\") " pod="openshift-dns/node-resolver-srj6p" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.569935 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pfvf\" (UniqueName: \"kubernetes.io/projected/71cdf350-59d3-4d6f-8995-173528429b59-kube-api-access-5pfvf\") pod \"machine-config-daemon-r269r\" (UID: \"71cdf350-59d3-4d6f-8995-173528429b59\") " pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.582130 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.596690 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.611094 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.627566 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.641898 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.644957 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.660766 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.661079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.662809 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.679073 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.696410 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.708771 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.719239 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-srj6p" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.725680 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.728305 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.747198 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.754239 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.754503 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:29:31.754454453 +0000 UTC m=+24.241657769 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.763387 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
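The UnmountVolume.TearDown failure above is a registration problem rather than a certificate problem: the kubelet cannot find kubevirt.io.hostpath-provisioner among the CSI drivers registered over its node-local plugin socket, so the teardown is parked and retried after durationBeforeRetry. A quick cluster-level cross-check is to list the CSIDriver API objects with client-go, as in the sketch below; note this is a related but not identical view, since the error refers to the kubelet's own registry, and the kubeconfig path here is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the node being debugged.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List the cluster's CSIDriver objects; kubevirt.io.hostpath-provisioner
	// should reappear here once the driver pod re-registers with the node.
	drivers, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range drivers.Items {
		fmt.Println(d.Name)
	}
}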
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.784389 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.787892 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-7cjx7"] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.788299 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.792260 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.792261 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.795168 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cj2h6"] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.796092 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.796817 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.796911 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-58k2s"] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.796973 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.797143 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.803024 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.803525 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.803597 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.803783 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.803956 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.804194 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.804398 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.804464 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.806993 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.808448 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.816385 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.834883 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.854930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-openvswitch\") pod 
\"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855078 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shgjg\" (UniqueName: \"kubernetes.io/projected/4acefc9f-f68a-4566-a0f5-656b961d4267-kube-api-access-shgjg\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855174 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-cni-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cni-binary-copy\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855216 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855239 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855257 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-systemd\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855276 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-bin\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855297 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-os-release\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855313 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-config\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855329 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-os-release\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855343 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-system-cni-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855361 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-etc-kubernetes\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855378 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-tuning-conf-dir\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855395 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-ovn\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-node-log\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855428 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-systemd-units\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855449 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqh4b\" (UniqueName: \"kubernetes.io/projected/2363cfc2-15b2-44f3-bd87-0e37a79ab157-kube-api-access-lqh4b\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855496 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-daemon-config\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855519 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-netd\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855541 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovn-node-metrics-cert\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855565 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-socket-dir-parent\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855586 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-netns\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855605 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cnibin\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855626 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-58k2s\" (UID: 
\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4acefc9f-f68a-4566-a0f5-656b961d4267-cni-binary-copy\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855658 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-k8s-cni-cncf-io\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855675 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-system-cni-dir\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855699 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-cnibin\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-cni-bin\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855750 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-slash\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855767 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-var-lib-openvswitch\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt4dq\" (UniqueName: 
\"kubernetes.io/projected/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-kube-api-access-qt4dq\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-cni-multus\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855818 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-kubelet\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855848 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-netns\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855863 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-etc-openvswitch\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855878 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-log-socket\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.855895 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-conf-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.856341 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-kubelet\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.856017 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.856234 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.856398 4813 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.856413 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.856241 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.856433 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-multus-certs\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.856467 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:31.856438009 +0000 UTC m=+24.343641405 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.856538 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:31.856526842 +0000 UTC m=+24.343730248 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.856594 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:31.856585363 +0000 UTC m=+24.343788789 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.856643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-env-overrides\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.856678 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-script-lib\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.856741 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.856791 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-hostroot\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.857004 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.857022 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.857035 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:29 crc kubenswrapper[4813]: E0129 16:29:29.857079 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:31.857068737 +0000 UTC m=+24.344271953 (durationBeforeRetry 2s). 
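The MountVolume.SetUp failures here follow a consistent pattern: each kube-api-access-* volume is a projected volume assembled from the pod's service-account token plus the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and until the kubelet's informer caches for that namespace are populated (the "Caches populated for *v1.ConfigMap" entries above), those sources are "not registered" and the mount cannot be built. The failed operation is then scheduled for retry at now + durationBeforeRetry. The sketch below imitates that retry bookkeeping; the initial delay and cap are assumptions for illustration, not the kubelet's actual tuning.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed backoff parameters for illustration only.
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute
	for attempt := 1; attempt <= 5; attempt++ {
		// On failure, record the earliest time the operation may run again,
		// mirroring "No retries permitted until ... (durationBeforeRetry 2s)".
		retryAt := time.Now().Add(delay)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, retryAt.UTC().Format(time.RFC3339), delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}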
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.857617 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.876157 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.893338 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.906074 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.930852 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.944735 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957341 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-bin\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957376 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957403 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-systemd\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-os-release\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957445 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-os-release\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957462 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-config\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957461 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957504 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-systemd\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957481 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-system-cni-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957579 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-etc-kubernetes\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-tuning-conf-dir\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957635 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-ovn\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957650 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-node-log\") pod \"ovnkube-node-cj2h6\" 
(UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957668 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqh4b\" (UniqueName: \"kubernetes.io/projected/2363cfc2-15b2-44f3-bd87-0e37a79ab157-kube-api-access-lqh4b\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-systemd-units\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957723 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-netns\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-os-release\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-daemon-config\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957855 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-netd\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957886 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovn-node-metrics-cert\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957894 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-etc-kubernetes\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc 
kubenswrapper[4813]: I0129 16:29:29.957912 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-socket-dir-parent\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957951 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cnibin\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957958 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-socket-dir-parent\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957982 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957991 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-netd\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957460 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-bin\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957528 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-system-cni-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958036 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-system-cni-dir\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cnibin\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958193 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-tuning-conf-dir\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958005 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-system-cni-dir\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958246 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4acefc9f-f68a-4566-a0f5-656b961d4267-cni-binary-copy\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958270 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-k8s-cni-cncf-io\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958292 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-cni-bin\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958316 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-config\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958323 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-cnibin\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-kubelet\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958362 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-systemd-units\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-slash\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958388 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-ovn\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958398 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-var-lib-openvswitch\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958411 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-node-log\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt4dq\" (UniqueName: \"kubernetes.io/projected/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-kube-api-access-qt4dq\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958450 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-cni-multus\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-conf-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958500 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-netns\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958525 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-etc-openvswitch\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-log-socket\") pod \"ovnkube-node-cj2h6\" (UID: 
\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958574 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-daemon-config\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958586 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-kubelet\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-hostroot\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958638 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-multus-certs\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958669 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-netns\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-cnibin\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958716 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958727 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-env-overrides\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-k8s-cni-cncf-io\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958753 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-script-lib\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958805 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shgjg\" (UniqueName: \"kubernetes.io/projected/4acefc9f-f68a-4566-a0f5-656b961d4267-kube-api-access-shgjg\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958824 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-openvswitch\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958819 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-slash\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-cni-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958872 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cni-binary-copy\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-kubelet\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-var-lib-openvswitch\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958937 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-cni-bin\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958937 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-log-socket\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.957702 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2363cfc2-15b2-44f3-bd87-0e37a79ab157-os-release\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959020 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-cni-multus\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959183 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-cni-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959184 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4acefc9f-f68a-4566-a0f5-656b961d4267-cni-binary-copy\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-openvswitch\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959217 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-multus-conf-dir\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959235 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-var-lib-kubelet\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959257 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-etc-openvswitch\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-hostroot\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: 
I0129 16:29:29.959274 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4acefc9f-f68a-4566-a0f5-656b961d4267-host-run-multus-certs\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959288 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-netns\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.958617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-ovn-kubernetes\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-env-overrides\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.959710 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2363cfc2-15b2-44f3-bd87-0e37a79ab157-cni-binary-copy\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.963457 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-script-lib\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.963474 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovn-node-metrics-cert\") pod \"ovnkube-node-cj2h6\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.972325 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.975874 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt4dq\" (UniqueName: \"kubernetes.io/projected/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-kube-api-access-qt4dq\") pod \"ovnkube-node-cj2h6\" (UID: 
\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.976661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shgjg\" (UniqueName: \"kubernetes.io/projected/4acefc9f-f68a-4566-a0f5-656b961d4267-kube-api-access-shgjg\") pod \"multus-7cjx7\" (UID: \"4acefc9f-f68a-4566-a0f5-656b961d4267\") " pod="openshift-multus/multus-7cjx7" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.985555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqh4b\" (UniqueName: \"kubernetes.io/projected/2363cfc2-15b2-44f3-bd87-0e37a79ab157-kube-api-access-lqh4b\") pod \"multus-additional-cni-plugins-58k2s\" (UID: \"2363cfc2-15b2-44f3-bd87-0e37a79ab157\") " pod="openshift-multus/multus-additional-cni-plugins-58k2s" Jan 29 16:29:29 crc kubenswrapper[4813]: I0129 16:29:29.991245 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:29Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.034562 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.067777 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.085352 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.105683 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.116877 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-7cjx7" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.125332 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.126572 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-sync
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.135909 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-58k2s"
Jan 29 16:29:30 crc kubenswrapper[4813]: W0129 16:29:30.138840 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ba0a14_c0cd_40c8_ab31_e106a7d0b0e5.slice/crio-e82f7b6f58e9cb0911dfe6a2065c085032b3655729c62abc9462f1100edced2a WatchSource:0}: Error finding container e82f7b6f58e9cb0911dfe6a2065c085032b3655729c62abc9462f1100edced2a: Status 404 returned error can't find the container with id e82f7b6f58e9cb0911dfe6a2065c085032b3655729c62abc9462f1100edced2a
Jan 29 16:29:30 crc kubenswrapper[4813]: W0129 16:29:30.157102 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2363cfc2_15b2_44f3_bd87_0e37a79ab157.slice/crio-9383e370ae55e5957d3525439b535835bb7fe08998a9a19ce63bee55170d3ca0 WatchSource:0}: Error finding container 9383e370ae55e5957d3525439b535835bb7fe08998a9a19ce63bee55170d3ca0: Status 404 returned error can't find the container with id 9383e370ae55e5957d3525439b535835bb7fe08998a9a19ce63bee55170d3ca0
Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.203313 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:06:04.938898367 +0000 UTC
Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.239566 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:29:30 crc kubenswrapper[4813]: E0129 16:29:30.239731 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.240273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:29:30 crc kubenswrapper[4813]: E0129 16:29:30.240346 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.240494 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:29:30 crc kubenswrapper[4813]: E0129 16:29:30.240560 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
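[editor's note] Every status-patch failure in this log ends the same way: the kubelet cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the serving certificate's NotAfter of 2025-08-24T17:21:41Z is long before the node's clock of 2026-01-29. A minimal sketch, not part of the log, of how one might confirm the certificate's validity window from the node in Go (assuming the port is reachable; verification is deliberately skipped so the expired leaf can still be read):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address taken from the failed webhook POSTs above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// The handshake would fail on the expired certificate otherwise;
		// verification is disabled only so the leaf can be inspected.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificates presented")
		return
	}
	leaf := certs[0]
	fmt.Println("subject:  ", leaf.Subject)
	fmt.Println("notBefore:", leaf.NotBefore)
	fmt.Println("notAfter: ", leaf.NotAfter)
	if time.Now().After(leaf.NotAfter) {
		// The condition the kubelet reports as
		// "certificate has expired or is not yet valid".
		fmt.Println("certificate is expired")
	}
}
```

Run against this node, it would be expected to print a notAfter of 2025-08-24T17:21:41Z, matching every webhook failure recorded here.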
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.244374 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.245687 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.247413 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.250842 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.251865 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.253887 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.254623 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.256075 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.257709 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.259498 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.260726 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.261808 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.262483 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.263529 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.406656 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7cjx7" event={"ID":"4acefc9f-f68a-4566-a0f5-656b961d4267","Type":"ContainerStarted","Data":"f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.406726 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7cjx7" event={"ID":"4acefc9f-f68a-4566-a0f5-656b961d4267","Type":"ContainerStarted","Data":"f03727050ef199e39c5c80631acf97b65c789e6c4eba15a412338fa0ce420ee2"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.408006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerStarted","Data":"8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.408056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerStarted","Data":"9383e370ae55e5957d3525439b535835bb7fe08998a9a19ce63bee55170d3ca0"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.409859 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152" exitCode=0 Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.409913 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.409957 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"e82f7b6f58e9cb0911dfe6a2065c085032b3655729c62abc9462f1100edced2a"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.413023 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.413082 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.413099 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"f5d84ea34b7b2208a6df1fb664cf80240bc8e4a6fa396e464ff4a1881ed6b0eb"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.415376 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-srj6p" 
event={"ID":"b9b262d6-5205-4aaf-85df-6ac3c03c5d93","Type":"ContainerStarted","Data":"52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.415453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-srj6p" event={"ID":"b9b262d6-5205-4aaf-85df-6ac3c03c5d93","Type":"ContainerStarted","Data":"94b6e84e37de8a5feeeab4a4cf3d6da350b0b32baff9d2498c097752ec423f0b"} Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.433277 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.458094 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"en
v-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.499031 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.535143 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.552811 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.585634 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.611284 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.627976 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.645639 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.668516 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.681852 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.695918 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.712550 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.727456 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.749635 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.772914 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.791880 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.816623 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.843297 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.861434 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.877140 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.898246 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.919285 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.933229 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.946426 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:30 crc kubenswrapper[4813]: I0129 16:29:30.962565 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:30Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.203577 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:04:04.32983224 +0000 UTC Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.422043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b"} Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.422446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74"} Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.422462 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d"} Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.423933 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c"} Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.426551 4813 generic.go:334] "Generic (PLEG): container finished" podID="2363cfc2-15b2-44f3-bd87-0e37a79ab157" containerID="8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da" exitCode=0 Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.426592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" 
event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerDied","Data":"8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da"} Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.441278 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.457742 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.470796 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.492352 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.510593 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.524389 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.543379 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.560997 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.577363 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.590832 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 
16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.612229 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.628835 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.645390 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.661628 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.680780 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.699271 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.717855 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.744001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.767461 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.779625 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.779936 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:29:35.779883204 +0000 UTC m=+28.267086420 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.783572 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.805864 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.816630 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.834465 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.865489 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.880706 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.880777 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.880812 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 
29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.880839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.880998 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881018 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881031 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881035 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881103 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:35.881085528 +0000 UTC m=+28.368288744 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881217 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:35.881200051 +0000 UTC m=+28.368403447 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881274 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881297 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881314 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:35.881303484 +0000 UTC m=+28.368506700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881321 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881342 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:31 crc kubenswrapper[4813]: E0129 16:29:31.881417 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:35.881385076 +0000 UTC m=+28.368588322 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.882576 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.897917 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.903428 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.912217 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.915113 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.920077 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.934279 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.948310 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.969462 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:31 crc kubenswrapper[4813]: I0129 16:29:31.990211 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:31Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.006077 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.019055 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.034718 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.064012 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.125442 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.162544 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.177783 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.193785 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.204291 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:45:51.995296534 +0000 UTC Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.206430 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.226445 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.240216 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.240380 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:32 crc kubenswrapper[4813]: E0129 16:29:32.240454 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:32 crc kubenswrapper[4813]: E0129 16:29:32.240375 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.240249 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:32 crc kubenswrapper[4813]: E0129 16:29:32.240541 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.262283 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.302339 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.339683 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.380811 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.419754 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.433415 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" 
event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd"} Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.433467 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342"} Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.433481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96"} Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.435563 4813 generic.go:334] "Generic (PLEG): container finished" podID="2363cfc2-15b2-44f3-bd87-0e37a79ab157" containerID="53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86" exitCode=0 Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.435642 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerDied","Data":"53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86"} Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.459689 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.501258 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc 
kubenswrapper[4813]: I0129 16:29:32.546580 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.579546 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.621939 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.660087 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:2
9:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.699464 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.740842 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.778163 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.826492 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.860155 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.891433 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7rrb4"] Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.891906 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.905311 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18f
ac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:32Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.910946 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.930609 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.950653 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.971656 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.989827 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7csg\" (UniqueName: \"kubernetes.io/projected/beb9c786-b3de-45db-8f65-712bcbbe8709-kube-api-access-p7csg\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.990012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/beb9c786-b3de-45db-8f65-712bcbbe8709-host\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:32 crc kubenswrapper[4813]: I0129 16:29:32.990064 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/beb9c786-b3de-45db-8f65-712bcbbe8709-serviceca\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.020816 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 
16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.060253 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.090701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7csg\" (UniqueName: \"kubernetes.io/projected/beb9c786-b3de-45db-8f65-712bcbbe8709-kube-api-access-p7csg\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.090793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/beb9c786-b3de-45db-8f65-712bcbbe8709-host\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.090811 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/beb9c786-b3de-45db-8f65-712bcbbe8709-serviceca\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.091027 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/beb9c786-b3de-45db-8f65-712bcbbe8709-host\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " 
pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.091928 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/beb9c786-b3de-45db-8f65-712bcbbe8709-serviceca\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.099949 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.134709 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7csg\" (UniqueName: \"kubernetes.io/projected/beb9c786-b3de-45db-8f65-712bcbbe8709-kube-api-access-p7csg\") pod \"node-ca-7rrb4\" (UID: \"beb9c786-b3de-45db-8f65-712bcbbe8709\") " pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.161187 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.203187 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7rrb4" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.204541 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:32:27.932735698 +0000 UTC Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.210617 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z 
is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.239149 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.277680 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.320794 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.359678 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.406160 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.442815 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7rrb4" event={"ID":"beb9c786-b3de-45db-8f65-712bcbbe8709","Type":"ContainerStarted","Data":"d0598b40ee999e5fe296317911c4cc28e499144a17b6472528c8ccd279b0361d"} Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.443718 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.445778 4813 generic.go:334] "Generic (PLEG): container finished" podID="2363cfc2-15b2-44f3-bd87-0e37a79ab157" containerID="7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96" exitCode=0 Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.445835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerDied","Data":"7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96"} Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.478837 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.526391 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f
13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.562463 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.598551 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.639651 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.681495 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.720826 4813 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.723009 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.723552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.723628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.723646 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.723817 4813 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.795346 4813 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.795548 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.795749 4813 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.797111 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.797166 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.797179 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.797197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.797211 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:33Z","lastTransitionTime":"2026-01-29T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:33 crc kubenswrapper[4813]: E0129 16:29:33.815483 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.819791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.819842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.819861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.819884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.819898 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:33Z","lastTransitionTime":"2026-01-29T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:33 crc kubenswrapper[4813]: E0129 16:29:33.836966 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.841337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.841389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.841400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.841420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.841433 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:33Z","lastTransitionTime":"2026-01-29T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.848080 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.864018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.864070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.864082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.864103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.864145 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:33Z","lastTransitionTime":"2026-01-29T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.880348 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.882822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.882870 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.882881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.882898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.882908 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:33Z","lastTransitionTime":"2026-01-29T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:33 crc kubenswrapper[4813]: E0129 16:29:33.899990 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: E0129 16:29:33.900466 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.902295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.902407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.902470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.902538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.902602 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:33Z","lastTransitionTime":"2026-01-29T16:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.919880 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:33 crc kubenswrapper[4813]: I0129 16:29:33.957817 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.001511 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:33Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.005302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.005346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.005358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.005380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.005393 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.040856 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.084608 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.108532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.108588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.108599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.108616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.108630 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.119546 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.164887 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29
T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.205302 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:24:45.696560081 +0000 UTC Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.207835 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.211038 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.211086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.211099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.211137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.211150 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.239644 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:34 crc kubenswrapper[4813]: E0129 16:29:34.239818 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.240018 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.240106 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:34 crc kubenswrapper[4813]: E0129 16:29:34.240186 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:34 crc kubenswrapper[4813]: E0129 16:29:34.240282 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.241991 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.279215 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.313805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.313846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.313859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.313881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.313894 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.322104 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.363804 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.396545 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.416150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.416188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.416198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.416214 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.416231 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.440020 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.451462 4813 generic.go:334] "Generic (PLEG): container finished" podID="2363cfc2-15b2-44f3-bd87-0e37a79ab157" containerID="d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd" exitCode=0 Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.451682 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" 
event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerDied","Data":"d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.453990 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7rrb4" event={"ID":"beb9c786-b3de-45db-8f65-712bcbbe8709","Type":"ContainerStarted","Data":"b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.458592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.476974 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.477731 4813 scope.go:117] "RemoveContainer" containerID="df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9" Jan 29 16:29:34 crc kubenswrapper[4813]: E0129 16:29:34.477904 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.480537 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.518871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.518915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.518924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.518940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.519023 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.521051 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.560349 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.607078 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.622140 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.622183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.622195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.622210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.622222 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.646567 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444
a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.681738 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.720546 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.726077 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.726184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.726206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.726234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.726253 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.762383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.801312 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.829911 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.829964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.829979 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.830000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.830018 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.845554 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.882983 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.919032 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.932126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.932151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.932159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.932175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.932188 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:34Z","lastTransitionTime":"2026-01-29T16:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:34 crc kubenswrapper[4813]: I0129 16:29:34.963473 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b1
7dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:34Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.009894 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.038854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.038896 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.038905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.038920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.038930 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.044894 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.078624 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.120710 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f
13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.141740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.142059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.142193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.142324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.142374 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.160255 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.198462 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.205458 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:48:18.064074398 +0000 UTC Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.245367 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.245411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.245424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.245441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.245457 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.348904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.348948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.348962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.348982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.348994 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.451171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.451206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.451216 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.451232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.451242 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.463916 4813 generic.go:334] "Generic (PLEG): container finished" podID="2363cfc2-15b2-44f3-bd87-0e37a79ab157" containerID="d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9" exitCode=0 Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.464480 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerDied","Data":"d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.479368 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.494749 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.517454 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.534168 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.554014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.554072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.554085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.554128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.554143 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.558188 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444
a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.572583 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.586410 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.599425 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.613962 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.634294 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.648936 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.656937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.656977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.656989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.657008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.657019 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.679484 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.719986 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.760814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.761203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.761303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.761407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.761487 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.762882 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.799926 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.820559 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.820794 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:29:43.820770405 +0000 UTC m=+36.307973621 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.864186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.864229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.864240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.864256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.864267 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.922516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.922602 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.922632 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.922665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922745 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922819 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922844 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922853 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:43.922831194 +0000 UTC m=+36.410034400 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922858 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922854 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922886 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922880 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922896 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:43.922885205 +0000 UTC m=+36.410088611 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.922904 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.923012 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:43.922992868 +0000 UTC m=+36.410196084 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:35 crc kubenswrapper[4813]: E0129 16:29:35.923121 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:29:43.923083601 +0000 UTC m=+36.410286817 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.967096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.967156 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.967167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.967184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:35 crc kubenswrapper[4813]: I0129 16:29:35.967194 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:35Z","lastTransitionTime":"2026-01-29T16:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.070076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.070162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.070181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.070206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.070225 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.172915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.172975 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.172989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.173016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.173032 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.206190 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:49:46.475075182 +0000 UTC Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.239711 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.239776 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.239781 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:36 crc kubenswrapper[4813]: E0129 16:29:36.239900 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:36 crc kubenswrapper[4813]: E0129 16:29:36.240059 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:36 crc kubenswrapper[4813]: E0129 16:29:36.240184 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.275206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.275250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.275262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.275282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.275296 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.378318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.378347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.378356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.378372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.378383 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.473728 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.474089 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.481032 4813 generic.go:334] "Generic (PLEG): container finished" podID="2363cfc2-15b2-44f3-bd87-0e37a79ab157" containerID="50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e" exitCode=0 Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.481100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerDied","Data":"50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.481752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.481811 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.481832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.481856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.481875 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.486988 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.506514 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.506543 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.521616 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.540804 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.562566 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.578429 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.583839 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.583892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.583901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.583919 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.583954 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.591958 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.609047 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.624210 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.637664 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.652985 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.669936 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.686767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.686779 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.686809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.686916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.686934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.686947 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.702333 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.721559 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.734990 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.747890 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.758002 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.771587 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e
1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.789602 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.789651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.789662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.789681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.789698 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.791459 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.807420 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.821656 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.836034 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:2
9:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.850829 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.862945 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.883588 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.891961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.892005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.892042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.892061 4813 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.892077 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.896725 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] 
pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.921756 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.959350 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.994287 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.994321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.994330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.994345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.994354 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:36Z","lastTransitionTime":"2026-01-29T16:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:36 crc kubenswrapper[4813]: I0129 16:29:36.999009 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:36Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.096899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 
16:29:37.096941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.096952 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.096968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.096979 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.200317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.200361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.200371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.200387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.200401 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.206892 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:35:01.810412645 +0000 UTC Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.303374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.303415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.303425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.303442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.303452 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.406857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.406901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.406918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.406941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.406956 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.490177 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" event={"ID":"2363cfc2-15b2-44f3-bd87-0e37a79ab157","Type":"ContainerStarted","Data":"16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.490306 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.491017 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.506718 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.510601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.510647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.510658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.510676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.510691 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.518939 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.525012 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.537245 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.553606 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.572168 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.586955 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.600001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.613182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.613236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.613251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.613300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.613338 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.617151 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.632474 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.645772 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.660527 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.682239 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.699025 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.715483 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.716150 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.716241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.716309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.716376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.716453 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.730963 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.752585 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.766882 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.779752 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.796573 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:2
9:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.812686 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.819071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.819137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.819148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.819165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.819175 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.838985 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.884596 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5
e0802750068b858d254d789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.922734 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.922771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.922781 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.922799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.922810 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:37Z","lastTransitionTime":"2026-01-29T16:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.952301 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.974979 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:37 crc kubenswrapper[4813]: I0129 16:29:37.999417 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:37Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.025011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.025055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.025064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.025081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.025092 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.038787 4813 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.041478 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb/status\": read tcp 38.102.83.47:50754->38.102.83.47:6443: use of closed network connection" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.078946 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.119460 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.127676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.128003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.128067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.128194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.128260 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.157544 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.202638 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.207649 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:16:57.563678024 +0000 UTC Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.230779 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.230981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.231044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.231103 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.231176 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.241387 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:38 crc kubenswrapper[4813]: E0129 16:29:38.241616 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.243048 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:38 crc kubenswrapper[4813]: E0129 16:29:38.243182 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.243289 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:38 crc kubenswrapper[4813]: E0129 16:29:38.243362 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.262449 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecf
a0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.281419 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.318760 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.333397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.333436 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.333446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.333463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.333475 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.360607 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.407904 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.436833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.436878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.436888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.436904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.436913 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.438662 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.485721 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5
e0802750068b858d254d789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.494003 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.519617 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.539184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.539213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.539222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.539236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.539245 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.560640 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.601102 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.639620 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.641375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.641430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.641450 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.641474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.641487 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.679703 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.718572 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.743973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.744009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.744020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.744035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.744047 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.759807 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.805449 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.846206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.846252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc 
kubenswrapper[4813]: I0129 16:29:38.846264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.846285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.846300 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.948634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.948671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.948681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.948698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:38 crc kubenswrapper[4813]: I0129 16:29:38.948736 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:38Z","lastTransitionTime":"2026-01-29T16:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.051302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.051609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.051823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.052045 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.052262 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.155099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.155950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.156061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.156183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.156296 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.208984 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:20:08.62420755 +0000 UTC Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.259381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.259412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.259421 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.259438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.259447 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.361651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.361691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.361701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.362090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.362145 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.465105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.465417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.465425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.465439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.465448 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.498389 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/0.log" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.501066 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f" exitCode=1 Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.501154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.501930 4813 scope.go:117] "RemoveContainer" containerID="94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.521580 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.536340 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.549924 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operat
or@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.567868 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.568167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.568199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.568212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.568229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.568240 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.582648 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.597329 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.612291 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.635979 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5
e0802750068b858d254d789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:39Z\\\",\\\"message\\\":\\\" removal\\\\nI0129 16:29:39.421975 6164 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:29:39.421979 6164 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:29:39.422011 6164 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:29:39.422028 6164 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:29:39.422065 6164 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:29:39.422071 6164 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:29:39.422077 6164 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:29:39.422083 6164 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:29:39.422089 6164 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:29:39.422095 6164 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:29:39.422101 6164 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:29:39.422455 6164 factory.go:656] Stopping watch factory\\\\nI0129 16:29:39.422518 6164 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:29:39.422626 6164 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540a
fd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.653607 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.667070 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.671034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.671065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.671076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.671096 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.671131 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.685079 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade
0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:2
9Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.700559 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.715207 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.729449 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.749634 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.777703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.777788 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.777801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.777836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.777859 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.879857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.879904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.879914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.879933 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.879945 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.982684 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.982925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.982997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.983090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:39 crc kubenswrapper[4813]: I0129 16:29:39.983189 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:39Z","lastTransitionTime":"2026-01-29T16:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.085563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.085607 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.085616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.085632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.085652 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.188342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.188402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.188412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.188429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.188456 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.209871 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:59:27.751780708 +0000 UTC Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.239234 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.239271 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.239353 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:40 crc kubenswrapper[4813]: E0129 16:29:40.239402 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:40 crc kubenswrapper[4813]: E0129 16:29:40.239583 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:40 crc kubenswrapper[4813]: E0129 16:29:40.239734 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.290383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.290419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.290431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.290448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.290461 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.316853 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.393836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.393890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.393909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.393928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.393941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.496578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.496643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.496662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.496688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.496706 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.505091 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/0.log" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.507145 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.507550 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.524946 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.541554 4813 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.555351 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.567931 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.581003 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.599824 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.599897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.599921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.599964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.599990 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.604833 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.613592 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2"] Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.614390 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.616936 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.618017 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.620383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.634527 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.648174 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.664510 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.677299 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.689680 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.702567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.702632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.702643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.702663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.702644 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.702673 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.730883 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:39Z\\\",\\\"message\\\":\\\" removal\\\\nI0129 16:29:39.421975 6164 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:29:39.421979 6164 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:29:39.422011 6164 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:29:39.422028 6164 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:29:39.422065 6164 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:29:39.422071 6164 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:29:39.422077 6164 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:29:39.422083 6164 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:29:39.422089 6164 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:29:39.422095 6164 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:29:39.422101 6164 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:29:39.422455 6164 factory.go:656] Stopping watch factory\\\\nI0129 16:29:39.422518 6164 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:29:39.422626 6164 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.746424 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.768952 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:39Z\\\",\\\"message\\\":\\\" removal\\\\nI0129 16:29:39.421975 6164 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:29:39.421979 6164 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:29:39.422011 6164 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:29:39.422028 6164 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:29:39.422065 6164 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:29:39.422071 6164 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:29:39.422077 6164 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:29:39.422083 6164 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:29:39.422089 6164 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:29:39.422095 6164 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:29:39.422101 6164 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:29:39.422455 6164 factory.go:656] Stopping watch factory\\\\nI0129 16:29:39.422518 6164 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:29:39.422626 6164 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.772487 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2987565-d79b-47ee-850d-774214a23f77-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.772538 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df4hv\" (UniqueName: \"kubernetes.io/projected/b2987565-d79b-47ee-850d-774214a23f77-kube-api-access-df4hv\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.772563 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2987565-d79b-47ee-850d-774214a23f77-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.772661 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2987565-d79b-47ee-850d-774214a23f77-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc 
kubenswrapper[4813]: I0129 16:29:40.782094 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 
2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.799526 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.804928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.804967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.804979 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.804997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.805010 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.814693 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.827662 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.843739 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.861089 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.873218 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2987565-d79b-47ee-850d-774214a23f77-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.873277 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df4hv\" (UniqueName: \"kubernetes.io/projected/b2987565-d79b-47ee-850d-774214a23f77-kube-api-access-df4hv\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.873302 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2987565-d79b-47ee-850d-774214a23f77-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.873356 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2987565-d79b-47ee-850d-774214a23f77-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.874047 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b2987565-d79b-47ee-850d-774214a23f77-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.874300 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b2987565-d79b-47ee-850d-774214a23f77-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: 
\"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.882087 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.882539 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b2987565-d79b-47ee-850d-774214a23f77-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.895336 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.895713 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df4hv\" (UniqueName: \"kubernetes.io/projected/b2987565-d79b-47ee-850d-774214a23f77-kube-api-access-df4hv\") pod \"ovnkube-control-plane-749d76644c-4dpk2\" (UID: \"b2987565-d79b-47ee-850d-774214a23f77\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.908247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.908296 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.908309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.908327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.908340 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:40Z","lastTransitionTime":"2026-01-29T16:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.913957 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath
\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.927937 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.937441 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a
02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: W0129 16:29:40.946570 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2987565_d79b_47ee_850d_774214a23f77.slice/crio-04df9ba9dda38977d30515e9ce3aeabacee769e65ee0f8da4d0fcad5bf892199 WatchSource:0}: Error finding container 04df9ba9dda38977d30515e9ce3aeabacee769e65ee0f8da4d0fcad5bf892199: Status 404 returned error can't find the container with id 04df9ba9dda38977d30515e9ce3aeabacee769e65ee0f8da4d0fcad5bf892199 Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.954894 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.968212 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:40 crc kubenswrapper[4813]: I0129 16:29:40.985222 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f
13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:40.999925 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.010932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.010983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.011000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.011017 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.011028 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.011761 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.114091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.114152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.114163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.114182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.114212 4813 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.210978 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:43:57.784428884 +0000 UTC Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.216884 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.216917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.216928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.216945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.216958 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.319537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.319611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.319628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.319654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.319675 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.422571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.422603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.422611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.422625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.422634 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.512222 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/1.log" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.513411 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/0.log" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.524509 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a" exitCode=1 Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.524585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.524633 4813 scope.go:117] "RemoveContainer" containerID="94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.525318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.525367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.525378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.525390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.525400 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.525530 4813 scope.go:117] "RemoveContainer" containerID="2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a" Jan 29 16:29:41 crc kubenswrapper[4813]: E0129 16:29:41.525968 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.526503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" event={"ID":"b2987565-d79b-47ee-850d-774214a23f77","Type":"ContainerStarted","Data":"60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.526538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" event={"ID":"b2987565-d79b-47ee-850d-774214a23f77","Type":"ContainerStarted","Data":"04df9ba9dda38977d30515e9ce3aeabacee769e65ee0f8da4d0fcad5bf892199"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.539224 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.554459 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.570054 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.581683 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.594827 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.616011 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba893
88b01d350f3aea23d32d6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94100fa5fd0e70c0bb5482fcebb2efee4a4969b5e0802750068b858d254d789f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:39Z\\\",\\\"message\\\":\\\" removal\\\\nI0129 16:29:39.421975 6164 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 16:29:39.421979 6164 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 16:29:39.422011 6164 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 16:29:39.422028 6164 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 16:29:39.422065 6164 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 16:29:39.422071 6164 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 16:29:39.422077 6164 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0129 16:29:39.422083 6164 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0129 16:29:39.422089 6164 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 16:29:39.422095 6164 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 16:29:39.422101 6164 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 16:29:39.422455 6164 factory.go:656] Stopping watch factory\\\\nI0129 16:29:39.422518 6164 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 16:29:39.422626 6164 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 
metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"h
ostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.627654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.627727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.627740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.627765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.627779 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.628848 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.640732 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.653978 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.670257 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.693176 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.709320 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.720481 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.730523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.730755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.730832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.730921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.731043 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.732260 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.743967 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.755037 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.834346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.834400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.834412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc 
kubenswrapper[4813]: I0129 16:29:41.834430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.834443 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.936218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.936248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.936258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.936276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:41 crc kubenswrapper[4813]: I0129 16:29:41.936288 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:41Z","lastTransitionTime":"2026-01-29T16:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.039219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.039283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.039299 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.039337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.039369 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.142506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.142573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.142590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.142616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.142633 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.211177 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:16:36.168575722 +0000 UTC Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.238726 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.238753 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.238881 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:42 crc kubenswrapper[4813]: E0129 16:29:42.239433 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:42 crc kubenswrapper[4813]: E0129 16:29:42.239534 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:42 crc kubenswrapper[4813]: E0129 16:29:42.239695 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.245929 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.245959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.245971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.245990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.246002 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.348711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.348765 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.348782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.348800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.348809 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.451399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.451431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.451439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.451452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.451463 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.530928 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/1.log" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.534145 4813 scope.go:117] "RemoveContainer" containerID="2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a" Jan 29 16:29:42 crc kubenswrapper[4813]: E0129 16:29:42.534325 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.535246 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" event={"ID":"b2987565-d79b-47ee-850d-774214a23f77","Type":"ContainerStarted","Data":"cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.546869 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.553358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.553399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.553409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.553426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.553435 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.559997 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.570612 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.585326 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.603464 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.615377 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.625378 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.637554 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:2
9:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.652033 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.655564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.655613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.655627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.655649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.655663 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.662752 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.680273 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba893
88b01d350f3aea23d32d6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.696623 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.711399 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.725494 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.738019 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.749388 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.758201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.758231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.758240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.758252 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.758260 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.763922 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.776920 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.788151 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.801838 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.818092 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.829090 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.840308 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.852993 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.860406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.860434 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.860443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.860456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.860465 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.870165 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.900023 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.928417 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.940130 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.951637 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.962818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.962842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.962850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.962865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.962873 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:42Z","lastTransitionTime":"2026-01-29T16:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.964297 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.977631 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:42 crc kubenswrapper[4813]: I0129 16:29:42.987849 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:42Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.065739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.065784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.065796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc 
kubenswrapper[4813]: I0129 16:29:43.065815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.065830 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.168010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.168071 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.168080 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.168096 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.168126 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.181601 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-nsttk"] Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.182059 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:43 crc kubenswrapper[4813]: E0129 16:29:43.182146 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.210532 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d480
6c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.211503 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 21:17:54.02179501 +0000 UTC Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.225005 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.236557 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.249345 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.261651 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.270693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.270730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.270744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.270760 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.270772 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.274388 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.285866 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.298439 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.299073 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfnj4\" (UniqueName: \"kubernetes.io/projected/e35b844b-1645-458c-b117-f60fe6042abe-kube-api-access-xfnj4\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.299158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.312512 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.333713 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.344968 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.358216 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.371050 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.373155 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.373267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.373340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.373427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.373510 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.383287 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.396246 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.399822 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.399915 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfnj4\" (UniqueName: \"kubernetes.io/projected/e35b844b-1645-458c-b117-f60fe6042abe-kube-api-access-xfnj4\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:43 crc kubenswrapper[4813]: E0129 16:29:43.400227 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:43 crc kubenswrapper[4813]: E0129 16:29:43.400313 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs podName:e35b844b-1645-458c-b117-f60fe6042abe nodeName:}" failed. No retries permitted until 2026-01-29 16:29:43.900293 +0000 UTC m=+36.387496216 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs") pod "network-metrics-daemon-nsttk" (UID: "e35b844b-1645-458c-b117-f60fe6042abe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.407352 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.418409 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfnj4\" (UniqueName: \"kubernetes.io/projected/e35b844b-1645-458c-b117-f60fe6042abe-kube-api-access-xfnj4\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.424339 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688
df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:43Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.476220 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.476247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc 
kubenswrapper[4813]: I0129 16:29:43.476256 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.476271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.476280 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.578037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.578083 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.578097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.578134 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.578153 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.680895 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.680945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.680956 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.680974 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.680988 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.783151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.783182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.783191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.783204 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.783213 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.885267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.885309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.885319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.885337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.885349 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.905880 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.906042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:43 crc kubenswrapper[4813]: E0129 16:29:43.906160 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:29:59.906097982 +0000 UTC m=+52.393301198 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:29:43 crc kubenswrapper[4813]: E0129 16:29:43.906406 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:43 crc kubenswrapper[4813]: E0129 16:29:43.906473 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs podName:e35b844b-1645-458c-b117-f60fe6042abe nodeName:}" failed. No retries permitted until 2026-01-29 16:29:44.906461932 +0000 UTC m=+37.393665148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs") pod "network-metrics-daemon-nsttk" (UID: "e35b844b-1645-458c-b117-f60fe6042abe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.988390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.988433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.988445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.988462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:43 crc kubenswrapper[4813]: I0129 16:29:43.988473 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:43Z","lastTransitionTime":"2026-01-29T16:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.006924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.006964 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.006984 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.007006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007048 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007093 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007137 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:00.007094099 +0000 UTC m=+52.494297325 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007140 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007162 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007207 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:00.007194922 +0000 UTC m=+52.494398138 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007220 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007244 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007284 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007304 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007328 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:00.007306365 +0000 UTC m=+52.494509581 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.007374 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:00.007353917 +0000 UTC m=+52.494557133 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.091058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.091118 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.091128 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.091143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.091152 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.141559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.141614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.141628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.141647 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.141670 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.153830 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.158237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.158277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
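(Aside: the patch failure above comes down to a standard x509 validity-window check: the webhook's serving certificate expired at 2025-08-24T17:21:41Z, while the node's clock reads 2026-01-29T16:29:44Z. A self-contained Go sketch of the same check using crypto/x509 follows; the certificate path is a placeholder.)

// Illustrative sketch only: reproduce the "certificate has expired or is not
// yet valid" condition by comparing the current time against a certificate's
// NotBefore/NotAfter window, as crypto/x509 does during verification.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
    "time"
)

func main() {
    // Placeholder path; point this at the webhook's serving certificate.
    data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(data)
    if block == nil || block.Type != "CERTIFICATE" {
        log.Fatal("no CERTIFICATE block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    now := time.Now()
    switch {
    case now.Before(cert.NotBefore):
        fmt.Printf("certificate not yet valid: current time %s is before %s\n",
            now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
    case now.After(cert.NotAfter):
        fmt.Printf("certificate has expired: current time %s is after %s\n",
            now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    default:
        fmt.Println("certificate is within its validity window")
    }
}

The same window check is why the node status patch keeps failing below: every retry hits the same expired certificate until it is renewed or the clock is corrected.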
event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.158289 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.158306 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.158316 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.171786 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.175519 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.175549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.175569 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.175587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.175599 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.187362 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.191017 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.191048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.191060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.191076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.191087 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.203455 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.208430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.208475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.208488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.208507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.208520 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.212009 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:54:00.55141429 +0000 UTC Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.221181 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:44Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.221336 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.223086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.223163 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.223176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.223199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.223213 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.239355 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.239389 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.239365 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.239491 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.239616 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.239665 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.326057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.326124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.326136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.326152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.326161 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.429160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.429245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.429258 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.429278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.429290 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.532131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.532187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.532203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.532224 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.532237 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.635230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.635285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.635298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.635313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.635324 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.737962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.738005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.738014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.738030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.738040 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.840645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.840702 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.840711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.840727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.840736 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.915615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.915817 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: E0129 16:29:44.915908 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs podName:e35b844b-1645-458c-b117-f60fe6042abe nodeName:}" failed. No retries permitted until 2026-01-29 16:29:46.915887201 +0000 UTC m=+39.403090417 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs") pod "network-metrics-daemon-nsttk" (UID: "e35b844b-1645-458c-b117-f60fe6042abe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.943573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.943617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.943625 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.943643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:44 crc kubenswrapper[4813]: I0129 16:29:44.943653 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:44Z","lastTransitionTime":"2026-01-29T16:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.046284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.046340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.046356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.046379 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.046394 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.149245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.149292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.149309 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.149331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.149348 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.212984 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:32:11.783631961 +0000 UTC Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.239627 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:45 crc kubenswrapper[4813]: E0129 16:29:45.239816 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.251172 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.251212 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.251222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.251237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.251248 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.353484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.353531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.353542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.353560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.353570 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.456397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.456458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.456475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.456501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.456515 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.558816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.558877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.558888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.558906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.558917 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.660913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.660985 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.661012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.661040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.661063 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.763606 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.763649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.763660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.763676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.763687 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.871416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.871456 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.871465 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.871481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.871490 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.975035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.975089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.975100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.975138 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:45 crc kubenswrapper[4813]: I0129 16:29:45.975155 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:45Z","lastTransitionTime":"2026-01-29T16:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.078269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.078308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.078318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.078335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.078348 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.181448 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.181493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.181501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.181517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.181526 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.213713 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:25:05.798756834 +0000 UTC Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.239387 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.239458 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.239544 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:46 crc kubenswrapper[4813]: E0129 16:29:46.239659 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:46 crc kubenswrapper[4813]: E0129 16:29:46.239763 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:46 crc kubenswrapper[4813]: E0129 16:29:46.239837 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.284463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.284507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.284516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.284533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.284544 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.386600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.386855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.386924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.387029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.387097 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.490091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.490378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.490438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.490497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.490558 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.592764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.592995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.593102 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.593213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.593291 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.696223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.696274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.696290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.696316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.696342 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.802203 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.802507 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.802593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.802678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.802774 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.906097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.906200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.906218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.906244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.906264 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:46Z","lastTransitionTime":"2026-01-29T16:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:46 crc kubenswrapper[4813]: I0129 16:29:46.938465 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:46 crc kubenswrapper[4813]: E0129 16:29:46.938604 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:46 crc kubenswrapper[4813]: E0129 16:29:46.938662 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs podName:e35b844b-1645-458c-b117-f60fe6042abe nodeName:}" failed. No retries permitted until 2026-01-29 16:29:50.938647855 +0000 UTC m=+43.425851071 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs") pod "network-metrics-daemon-nsttk" (UID: "e35b844b-1645-458c-b117-f60fe6042abe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.009066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.009099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.009127 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.009153 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.009163 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.112075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.112122 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.112132 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.112146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.112156 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.213823 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:39:14.719656177 +0000 UTC Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.214962 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.214998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.215009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.215027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.215038 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.239262 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:47 crc kubenswrapper[4813]: E0129 16:29:47.239433 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.320880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.320923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.320935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.320952 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.320963 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.423350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.423394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.423409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.423428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.423441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.526815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.526855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.526867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.526886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.526898 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.630057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.630347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.630389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.630425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.630446 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.732967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.733020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.733029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.733044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.733055 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.835563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.835696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.835705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.835722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.835732 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.938308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.938352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.938360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.938377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:47 crc kubenswrapper[4813]: I0129 16:29:47.938388 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:47Z","lastTransitionTime":"2026-01-29T16:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.040603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.040641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.040650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.040665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.040674 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.143735 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.143816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.143833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.143857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.143873 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.214655 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:32:10.885861981 +0000 UTC Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.239178 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:48 crc kubenswrapper[4813]: E0129 16:29:48.239422 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.239178 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.239507 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:48 crc kubenswrapper[4813]: E0129 16:29:48.239891 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:48 crc kubenswrapper[4813]: E0129 16:29:48.239690 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.245482 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.245512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.245520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.245532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.245542 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.254541 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.266701 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.280902 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.299473 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.313519 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.326924 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.339271 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.347361 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.352267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.352311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.352338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.352359 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.356945 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f87
93601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.371816 4813 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.383416 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.393795 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.403530 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.420126 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.439056 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.451320 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.455408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.455451 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.455466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.455484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.455497 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.465547 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220
d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.477818 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.556948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.556984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.556996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.557014 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.557026 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.659621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.659664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.659673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.659692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.659703 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.762197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.762236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.762247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.762263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.762274 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.864655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.864697 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.864709 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.864728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.864739 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.967780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.967841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.967859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.967880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:48 crc kubenswrapper[4813]: I0129 16:29:48.967896 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:48Z","lastTransitionTime":"2026-01-29T16:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.070764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.070810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.070819 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.070836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.070848 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.173550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.173586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.173594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.173612 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.173621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.215273 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:59:50.363529482 +0000 UTC Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.238653 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:49 crc kubenswrapper[4813]: E0129 16:29:49.239143 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.239263 4813 scope.go:117] "RemoveContainer" containerID="df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.276990 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.277036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.277049 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.277072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.277085 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.379027 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.379060 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.379070 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.379086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.379098 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.481445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.481780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.481792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.481809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.481821 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.565409 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.567511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.567825 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.583955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.584031 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.584042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.584058 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.584071 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.598360 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.613010 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.624720 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.637979 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.652826 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.668163 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.679959 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.686918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.686970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.686986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.687003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.687014 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.695452 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.709544 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.727861 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.738998 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.753574 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.767428 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.783759 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.789786 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.789837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.789849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.789869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.789881 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.799559 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.811210 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.826145 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:49Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.892847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.892899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc 
kubenswrapper[4813]: I0129 16:29:49.892910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.892930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.892942 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.995579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.995645 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.995664 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.995692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:49 crc kubenswrapper[4813]: I0129 16:29:49.995710 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:49Z","lastTransitionTime":"2026-01-29T16:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.098939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.098986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.098999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.099015 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.099028 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.201401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.201441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.201452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.201468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.201477 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.216218 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 07:02:58.321675036 +0000 UTC Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.239764 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.239936 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:50 crc kubenswrapper[4813]: E0129 16:29:50.239972 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.240101 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:50 crc kubenswrapper[4813]: E0129 16:29:50.240262 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:50 crc kubenswrapper[4813]: E0129 16:29:50.240409 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.304195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.304246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.304254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.304271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.304281 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.406977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.407020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.407028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.407044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.407055 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.510089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.510159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.510181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.510202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.510215 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.612905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.613397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.613554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.613651 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.613719 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.717039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.717084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.717097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.717131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.717144 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.819964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.820009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.820021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.820040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.820051 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.922464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.922531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.922552 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.922575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.922589 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:50Z","lastTransitionTime":"2026-01-29T16:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:50 crc kubenswrapper[4813]: I0129 16:29:50.981054 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:50 crc kubenswrapper[4813]: E0129 16:29:50.981230 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:50 crc kubenswrapper[4813]: E0129 16:29:50.981322 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs podName:e35b844b-1645-458c-b117-f60fe6042abe nodeName:}" failed. No retries permitted until 2026-01-29 16:29:58.981295837 +0000 UTC m=+51.468499063 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs") pod "network-metrics-daemon-nsttk" (UID: "e35b844b-1645-458c-b117-f60fe6042abe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.025441 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.025515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.025542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.025567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.025582 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.128784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.128835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.128849 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.128889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.128903 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.216838 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:33:25.345412415 +0000 UTC Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.232510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.232564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.232574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.232595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.232608 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.239105 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:51 crc kubenswrapper[4813]: E0129 16:29:51.239248 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.336175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.336217 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.336229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.336247 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.336260 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.439225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.439315 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.439336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.439365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.439393 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.542307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.542352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.542362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.542383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.542394 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.645226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.645509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.645571 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.645632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.645684 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.748909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.748971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.748983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.749012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.749025 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.851264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.851312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.851328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.851347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.851359 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.954187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.954235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.954248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.954266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:51 crc kubenswrapper[4813]: I0129 16:29:51.954280 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:51Z","lastTransitionTime":"2026-01-29T16:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.057617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.057663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.057674 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.057692 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.057705 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.160495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.160534 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.160544 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.160560 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.160571 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.217962 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:44:55.8974928 +0000 UTC Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.239376 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.239452 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.239568 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:52 crc kubenswrapper[4813]: E0129 16:29:52.239560 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:52 crc kubenswrapper[4813]: E0129 16:29:52.239658 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:52 crc kubenswrapper[4813]: E0129 16:29:52.239730 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.263704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.263770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.263784 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.263818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.263831 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.367585 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.367656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.367673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.367700 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.367714 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.470520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.470554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.470565 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.470583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.470594 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.574284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.574334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.574347 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.574365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.574377 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.676626 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.676658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.676669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.676687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.676698 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.779336 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.779397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.779409 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.779427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.779443 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.881827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.881862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.881874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.881890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.881900 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.984805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.984844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.984867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.984889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:52 crc kubenswrapper[4813]: I0129 16:29:52.984899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:52Z","lastTransitionTime":"2026-01-29T16:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.086891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.086948 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.086965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.086989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.087006 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.189269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.189305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.189314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.189331 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.189341 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.218233 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:42:24.912581916 +0000 UTC Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.239710 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:53 crc kubenswrapper[4813]: E0129 16:29:53.239974 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.292490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.292539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.292551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.292570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.292583 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.395557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.395610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.395622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.395649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.395660 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.498494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.498536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.498547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.498568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.498581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.600792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.600832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.600841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.600856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.600892 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.704062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.704378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.704459 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.704540 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.704625 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.806918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.807368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.807533 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.807672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.807838 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.910543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.911081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.911175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.911249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:53 crc kubenswrapper[4813]: I0129 16:29:53.911363 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:53Z","lastTransitionTime":"2026-01-29T16:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.013838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.014101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.014191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.014259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.014317 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.117191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.117218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.117226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.117238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.117264 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.218348 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:37:00.028456646 +0000 UTC Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.219719 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.219770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.219782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.219800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.219810 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.239155 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.239210 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.239250 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.239291 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.239395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.239487 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.322611 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.322690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.322704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.322730 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.322747 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.423350 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.423396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.423405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.423420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.423429 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.435295 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.439887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.439927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.439937 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.439952 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.439962 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.452013 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.456053 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.456106 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.456165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.456197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.456218 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.492762 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.501969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.502030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.502040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.502061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.502073 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.526883 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.531196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.531251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.531269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.531292 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.531305 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.545761 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:54Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:54 crc kubenswrapper[4813]: E0129 16:29:54.545997 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.548328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.548369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.548381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.548404 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.548416 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.651609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.651660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.651672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.651690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.651703 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.754066 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.754368 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.754474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.754566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.754628 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.857273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.857319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.857328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.857345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.857355 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.960272 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.960344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.960356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.960373 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:54 crc kubenswrapper[4813]: I0129 16:29:54.960384 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:54Z","lastTransitionTime":"2026-01-29T16:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.063705 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.063745 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.063754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.063770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.063782 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
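
Every "Node became not ready" condition above carries the same root cause: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/ (on this cluster, OVN-Kubernetes writes it once its pods come up). A hypothetical standalone check that mirrors what the runtime's CNI plumbing looks for; the directory path is taken from the log message, and the extension list follows common libcni conventions:

// cnicheck.go: diagnostic sketch for the NetworkReady=false condition.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	var configs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni conventionally loads
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		// This is the state the kubelet keeps logging above.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Printf("CNI configs present: %v\n", configs)
}
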
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.166187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.166222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.166232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.166248 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.166259 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.218441 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:13:20.466022903 +0000 UTC
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.238979 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:29:55 crc kubenswrapper[4813]: E0129 16:29:55.239141 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.241764 4813 scope.go:117] "RemoveContainer" containerID="2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.268822 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.268880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.268902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.268930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.268954 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
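
The certificate_manager line above is the kubelet's serving-certificate rotation bookkeeping: client-go's certificate manager schedules rotation at a jittered point around 80% (+/-10%) of the certificate's lifetime, and the logged deadline (2025-11-16) already lies in the past relative to the node clock (2026-01-29), so rotation is due immediately. A sketch of the deadline arithmetic; the NotAfter value is taken from the log, while NotBefore and the one-year lifetime are assumptions for illustration:

// rotation.go: deadline arithmetic behind the certificate_manager line.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// jitter uniformly in [0.7, 0.9] of the total lifetime,
	// i.e. around 80% +/- 10%
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * fraction))
}

func main() {
	// Expiration from the log; the issue time is a hypothetical value.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed 1-year certificate
	fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
}
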
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.371587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.371659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.371685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.371717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.371741 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.475130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.475167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.475177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.475194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.475205 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.577941 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.577976 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.577988 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.578005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.578017 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
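
Every webhook failure in this log, for node and pod status alike, reduces to a clock-versus-NotAfter comparison: the network-node-identity serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. A small diagnostic sketch (hypothetical, not part of kubelet) that pulls the peer certificate from the endpoint named in the errors and prints its validity window:

// certcheck.go: fetch and inspect the webhook endpoint's serving cert.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the log; InsecureSkipVerify is deliberate so the
	// handshake succeeds even though the certificate is expired.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // diagnostic only: we want the cert, not a verified session
	})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the x509 failure in the log: current time is after NotAfter.
		fmt.Println("certificate has EXPIRED")
	}
}
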
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.586721 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/1.log"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.588573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382"}
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.589730 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6"
Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.606229 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.626259 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.640176 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.652008 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.669015 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742
fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: 
default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.680276 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 
16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.680669 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.680698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.680708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.680721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.680731 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.692765 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.702631 4813 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.718701 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.731222 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.743623 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.757206 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.770081 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.783061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.783101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.783131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.783146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.783158 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.792998 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.806297 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.817420 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.829610 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.830467 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.838711 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.846053 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.860059 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.873957 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.886325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.886380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.886393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.886410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.886424 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.898070 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: 
default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.912148 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 
16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.927028 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.942752 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.956967 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.978656 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.991910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.991968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.991992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.992013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.992025 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:55Z","lastTransitionTime":"2026-01-29T16:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:55 crc kubenswrapper[4813]: I0129 16:29:55.996192 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:55Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.008789 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.033256 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.046697 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.059915 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.074177 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:2
9:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.088142 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.093971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.094000 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.094009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.094023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.094032 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.099849 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.196340 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.196413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.196426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.196442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.196454 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.218535 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:37:42.506604694 +0000 UTC Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.239385 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:56 crc kubenswrapper[4813]: E0129 16:29:56.239827 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.239883 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.239671 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:56 crc kubenswrapper[4813]: E0129 16:29:56.240125 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:56 crc kubenswrapper[4813]: E0129 16:29:56.240271 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.298840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.298887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.298902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.298920 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.298932 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.401924 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.401967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.401978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.401995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.402007 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.503995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.504036 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.504046 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.504064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.504075 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.593991 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/2.log" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.594826 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/1.log" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.597820 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382" exitCode=1 Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.597863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382"} Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.597936 4813 scope.go:117] "RemoveContainer" containerID="2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.598796 4813 scope.go:117] "RemoveContainer" containerID="a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382" Jan 29 16:29:56 crc kubenswrapper[4813]: E0129 16:29:56.599039 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.607549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.607599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.607611 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.607633 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.607648 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.615174 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.627719 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.642237 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 
2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.657224 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.672072 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.692545 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc01e9c3e16b4061990007018b5dd1e0b7ba89388b01d350f3aea23d32d6e4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"message\\\":\\\"29T16:29:40Z is after 2025-08-24T17:21:41Z]\\\\nI0129 16:29:40.236516 6291 services_controller.go:434] Service openshift-ingress/router-internal-default retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{router-internal-default openshift-ingress ed650b1e-939d-4166-88db-ddadc6d5accd 5426 0 2025-02-23 05:23:23 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[service.alpha.openshift.io/serving-cert-secret-name:router-metrics-certs-default service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{apps/v1 Deployment router-default bb5b8eff-af7d-47c8-85cd-465e0b29b7d1 0xc0006e56ee \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalNa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.706798 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 
16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.710944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.710984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.710996 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.711018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.711029 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.725816 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.741559 4813
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.757428 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.771554 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.786025 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.799512 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.810725 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.813855 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.814372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.814392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.814449 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.814464 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.833604 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.847818 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.860894 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.876531 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.917515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.917573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.917588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.917608 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:56 crc kubenswrapper[4813]: I0129 16:29:56.917623 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:56Z","lastTransitionTime":"2026-01-29T16:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.020469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.020520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.020532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.020551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.020568 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.123880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.123930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.123942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.123959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.123972 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.218953 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:16:12.563192728 +0000 UTC Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.228239 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.228303 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.228319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.228345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.228360 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.239619 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:57 crc kubenswrapper[4813]: E0129 16:29:57.239804 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.332568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.332646 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.332670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.332704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.332729 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.434951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.435018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.435028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.435042 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.435052 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.539025 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.539088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.539100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.539137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.539152 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.604527 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/2.log" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.608241 4813 scope.go:117] "RemoveContainer" containerID="a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382" Jan 29 16:29:57 crc kubenswrapper[4813]: E0129 16:29:57.608480 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.623952 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.636765 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.642082 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.642312 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.642398 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.642501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.642609 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.648196 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.662884 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.676128 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.700874 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742
fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.715323 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.729980 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.745044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.745090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.745101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.745137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.745149 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.745195 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.757385 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.770360 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.784319 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.796330 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.810966 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.832160 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.847303 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.848136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.848186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.848202 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.848226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.848242 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.859547 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.868905 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:57Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.950926 4813 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.950966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.950978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.950998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:57 crc kubenswrapper[4813]: I0129 16:29:57.951010 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:57Z","lastTransitionTime":"2026-01-29T16:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.053543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.053587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.053599 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.053617 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.053628 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.156243 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.156277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.156285 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.156300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.156310 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.219774 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:21:48.58793219 +0000 UTC Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.239340 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.239416 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.239356 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:29:58 crc kubenswrapper[4813]: E0129 16:29:58.239495 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:29:58 crc kubenswrapper[4813]: E0129 16:29:58.239571 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:29:58 crc kubenswrapper[4813]: E0129 16:29:58.239622 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.258430 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.258501 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.258532 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.258558 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.259045 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.264457 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.276275 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.288264 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.308920 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.321721 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.332818 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.348277 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.362334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.362363 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.362372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.362387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.362398 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.362212 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.373621 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.384970 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 
2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.395617 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.406402 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.423175 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.433488 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.445418 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.458854 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.464375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.464406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.464417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.464439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.464449 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.474768 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f87
93601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.487173 4813 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:29:58Z is after 2025-08-24T17:21:41Z"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.566795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.566824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.566832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.566845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.566855 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.669088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.669148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.669165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.669187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.669199 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.771429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.771478 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.771490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.771510 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.771524 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
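The two status patches above are rejected because the node clock (2026-01-29) is past the NotAfter date (2025-08-24T17:21:41Z) of the serving certificate presented by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, so Go's TLS client fails the handshake during certificate verification and the kubelet's patch never reaches the API server. A minimal sketch of that validity-window check, using a throwaway self-signed certificate (the real one is managed by the cluster's network-node-identity component; everything below is illustrative):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Throwaway self-signed certificate valid for one hour, standing in for
	// the webhook's serving certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "127.0.0.1"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	// The same window check the TLS handshake applies, evaluated now and
	// again with the clock pushed past NotAfter, reproducing the log's
	// "current time ... is after ..." failure mode.
	for _, now := range []time.Time{time.Now(), time.Now().Add(48 * time.Hour)} {
		switch {
		case now.Before(cert.NotBefore):
			fmt.Printf("x509: not yet valid: current time %s is before %s\n",
				now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
		case now.After(cert.NotAfter):
			fmt.Printf("x509: certificate has expired: current time %s is after %s\n",
				now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		default:
			fmt.Println("certificate within its validity window")
		}
	}
}

Until the webhook certificate is rotated (or the node clock agrees with its validity window), every status patch the webhook intercepts will keep failing the same way.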
Has your network provider started?"} Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.874850 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.874891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.874901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.874917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.874927 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.977575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.977824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.977887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.977964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:58 crc kubenswrapper[4813]: I0129 16:29:58.978051 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:58Z","lastTransitionTime":"2026-01-29T16:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.065325 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:59 crc kubenswrapper[4813]: E0129 16:29:59.065538 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:59 crc kubenswrapper[4813]: E0129 16:29:59.066072 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs podName:e35b844b-1645-458c-b117-f60fe6042abe nodeName:}" failed. No retries permitted until 2026-01-29 16:30:15.066054606 +0000 UTC m=+67.553257822 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs") pod "network-metrics-daemon-nsttk" (UID: "e35b844b-1645-458c-b117-f60fe6042abe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.080624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.080675 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.080689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.080707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.080719 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.184100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.184170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.184182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.184199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.184210 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.220707 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:37:52.27490882 +0000 UTC Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.239211 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:29:59 crc kubenswrapper[4813]: E0129 16:29:59.239379 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
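The nestedpendingoperations entries expose the volume manager's retry policy: the metrics-certs mount above is parked for 16s, and later operations in this log for 32s, i.e. the delay doubles per failure up to a cap. A sketch of that doubling-with-cap behavior (the constants here are illustrative, not the kubelet's internal ones):

package main

import (
	"fmt"
	"time"
)

// backoff mirrors the doubling retry delay visible above (16s, then 32s):
// each failed attempt doubles the wait, up to a cap.
type backoff struct {
	initial, max, current time.Duration
}

func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
		return b.current
	}
	b.current *= 2
	if b.current > b.max {
		b.current = b.max
	}
	return b.current
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 10; i++ {
		fmt.Printf("attempt %d: wait %v before retry\n", i+1, b.next())
	}
}

The "not registered" errors themselves mean the kubelet's local object cache has no informer entry for the secret yet, so each retry is expected to fail until the pod's namespace objects are re-registered after the restart.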
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.291266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.291900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.292022 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.292056 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.292080 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.396126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.396164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.396175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.396193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.396202 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.499317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.499387 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.499407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.499977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.500077 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.603431 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.603472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.603483 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.603500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.603513 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.707136 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.707180 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.707192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.707210 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.707221 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.810672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.810728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.810741 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.810763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.810778 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.914185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.914229 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.914240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.914262 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.914275 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:29:59Z","lastTransitionTime":"2026-01-29T16:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:29:59 crc kubenswrapper[4813]: I0129 16:29:59.975452 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:29:59 crc kubenswrapper[4813]: E0129 16:29:59.975771 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:30:31.975729954 +0000 UTC m=+84.462933170 (durationBeforeRetry 32s). 
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.016502 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.016542 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.016550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.016566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.016576 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.076433 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.076527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.076553 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.076576 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076735 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076810 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076855 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076859 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:32.076829755 +0000 UTC m=+84.564033141 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076746 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076905 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076921 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076956 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:32.076947578 +0000 UTC m=+84.564150804 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076868 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076994 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:32.07698692 +0000 UTC m=+84.564190356 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.076776 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.077034 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:30:32.077024871 +0000 UTC m=+84.564228087 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.118796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.118970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.118987 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.119009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.119023 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.220957 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:13:14.17143048 +0000 UTC Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.222273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.222317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.222328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.222345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.222359 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.239723 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.239852 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.239944 4813 util.go:30] "No sandbox for pod can be found. 
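The certificate_manager.go:356 lines deserve a close look: each reports the same kubelet-serving expiry (2026-02-24 05:53:03 UTC) but a different rotation deadline (2025-11-09 earlier, 2026-01-04 here, 2025-12-13 below). That is expected behavior: client-go's certificate manager re-draws the deadline as a random point in roughly the last 30% of the certificate's validity each time it evaluates. A sketch of that computation, assuming a one-year certificate issued 2025-02-24 (the issue time is inferred from the expiry, not present in the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point between ~70% and 100% of the
// certificate's lifetime, mirroring client-go's jittered deadline; each
// evaluation re-rolls the jitter, which is why consecutive log lines show
// different deadlines for the same certificate.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
	}
}

All three logged deadlines fall inside that 70-100% band, consistent with the sketch.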
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.239861 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.240050 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:00 crc kubenswrapper[4813]: E0129 16:30:00.240231 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.325499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.325557 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.325567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.325586 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.325597 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.428481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.428530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.428547 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.428567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.428581 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.532176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.532276 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.532288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.532307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.532325 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.635917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.636278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.636290 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.636311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.636323 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.739302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.739344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.739353 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.739369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.739379 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.841847 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.841930 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.841942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.841967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.841982 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.944253 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.944295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.944307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.944324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:00 crc kubenswrapper[4813]: I0129 16:30:00.944336 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:00Z","lastTransitionTime":"2026-01-29T16:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.046160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.046189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.046199 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.046213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.046222 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.148992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.149034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.149044 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.149061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.149073 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.221928 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:29:23.924701398 +0000 UTC Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.239387 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:01 crc kubenswrapper[4813]: E0129 16:30:01.239588 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.252154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.252191 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.252200 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.252213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.252223 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.354659 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.354737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.354749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.354770 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.354781 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.457391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.457453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.457462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.457475 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.457484 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.559550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.559583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.559593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.559610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.559622 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.586828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.610726 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.624426 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.636465 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.648269 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.660601 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.661961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.661995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.662005 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.662020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.662029 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.676579 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.688760 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.703978 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.717569 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.736456 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.750687 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.765308 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.765357 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.765367 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.765385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.765398 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.768164 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.781801 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.797557 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 
2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.813048 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.829408 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.843537 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.862002 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:01Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.868554 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.868711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc 
kubenswrapper[4813]: I0129 16:30:01.868777 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.868800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.868816 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.972567 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.972605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.972616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.972635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:01 crc kubenswrapper[4813]: I0129 16:30:01.972646 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:01Z","lastTransitionTime":"2026-01-29T16:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.075187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.075238 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.075249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.075269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.075282 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.178141 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.178181 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.178190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.178205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.178214 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.222998 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:49:27.597955214 +0000 UTC Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.239575 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.239575 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:02 crc kubenswrapper[4813]: E0129 16:30:02.239738 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.239752 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:02 crc kubenswrapper[4813]: E0129 16:30:02.239803 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:02 crc kubenswrapper[4813]: E0129 16:30:02.239868 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.280685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.280726 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.280737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.280799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.280814 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.383677 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.383733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.383749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.383766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.383782 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.487587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.487636 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.487653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.487674 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.487687 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.590162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.590196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.590207 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.590219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.590228 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.692464 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.692743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.692815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.692890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.692948 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.795673 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.795897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.795909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.795927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.795942 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.899171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.899230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.899246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.899267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:02 crc kubenswrapper[4813]: I0129 16:30:02.899282 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:02Z","lastTransitionTime":"2026-01-29T16:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.001658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.001694 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.001703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.001716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.001726 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.103817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.103851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.103859 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.103874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.103884 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.206891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.206938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.206947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.206963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.206972 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.223261 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:14:01.2180648 +0000 UTC Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.239638 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:03 crc kubenswrapper[4813]: E0129 16:30:03.239772 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.309549 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.309605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.309624 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.309650 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.309663 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.412227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.412269 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.412281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.412297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.412309 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.514559 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.514601 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.514613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.514628 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.514640 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.617177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.617228 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.617240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.617259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.617272 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.720322 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.720356 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.720364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.720378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.720389 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.823029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.823061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.823072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.823087 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.823096 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.926274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.926319 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.926330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.926345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:03 crc kubenswrapper[4813]: I0129 16:30:03.926355 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:03Z","lastTransitionTime":"2026-01-29T16:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.028281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.028317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.028330 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.028346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.028358 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.131402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.131446 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.131458 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.131476 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.131488 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.223567 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:33:55.341477167 +0000 UTC Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.233754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.233800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.233814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.233832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.233844 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.239372 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.239427 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.239536 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.239684 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.239949 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.240074 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.336900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.336959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.336973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.336995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.337007 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.439732 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.439782 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.439793 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.439810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.439821 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.542380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.542428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.542437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.542453 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.542462 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.552551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.552588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.552603 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.552621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.552631 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.565163 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.569325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.569362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.569371 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.569389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.569399 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.581387 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.584993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.585030 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.585041 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.585059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.585070 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.597422 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.601575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.601605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.601614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.601630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.601642 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.613764 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.617538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.617714 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.618057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.618198 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.618481 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.630315 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:04Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:04 crc kubenswrapper[4813]: E0129 16:30:04.630432 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.644966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.645225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.645344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.645444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.645539 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.747734 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.747766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.747775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.747789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.747798 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.851078 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.851142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.851152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.851167 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.851176 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.952992 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.953067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.953091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.953148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:04 crc kubenswrapper[4813]: I0129 16:30:04.953172 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:04Z","lastTransitionTime":"2026-01-29T16:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.056403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.056462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.056485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.056509 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.056525 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.158731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.159001 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.159063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.159209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.159286 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.224322 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:39:57.108629349 +0000 UTC
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.239015 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:05 crc kubenswrapper[4813]: E0129 16:30:05.239203 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.261843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.261887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.261903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.261925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.261941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.364462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.364516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.364528 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.364546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.364558 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.466520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.466564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.466575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.466591 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.466603 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.569538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.569590 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.569600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.569619 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.569630 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.672270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.672311 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.672323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.672339 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.672350 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.774885 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.774928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.774938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.774954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.774968 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.876970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.877028 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.877040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.877059 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.877075 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.979543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.979583 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.979592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.979610 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:05 crc kubenswrapper[4813]: I0129 16:30:05.979621 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:05Z","lastTransitionTime":"2026-01-29T16:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.081838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.081897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.081910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.081928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.081941 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.184481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.184551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.184566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.184582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.184593 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.224954 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:31:37.535506897 +0000 UTC
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.238955 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.239009 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:06 crc kubenswrapper[4813]: E0129 16:30:06.239092 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.238965 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:06 crc kubenswrapper[4813]: E0129 16:30:06.239226 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:06 crc kubenswrapper[4813]: E0129 16:30:06.239287 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.286795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.286838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.286846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.286861 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.286869 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.388883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.388935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.388945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.388961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.388971 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.491971 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.492013 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.492023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.492039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.492051 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.595346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.595397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.595411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.595433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.595449 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.697771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.697813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.697823 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.697838 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.697848 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.799946 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.799980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.799989 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.800004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.800013 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.902889 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.902955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.902967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.902991 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:06 crc kubenswrapper[4813]: I0129 16:30:06.903008 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:06Z","lastTransitionTime":"2026-01-29T16:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.006170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.006237 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.006249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.006270 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.006284 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.109440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.109490 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.109499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.109514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.109525 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.212961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.213075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.213097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.213126 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.213137 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.225840 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:44:52.473405896 +0000 UTC
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.239348 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:07 crc kubenswrapper[4813]: E0129 16:30:07.239617 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.315910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.315951 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.315966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.315982 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.315992 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.418455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.418514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.418524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.418539 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.418548 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.521232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.521277 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.521297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.521317 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.521361 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.623764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.623815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.623827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.623854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.623866 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.726091 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.726152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.726162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.726177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.726187 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.828844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.828883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.828892 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.828906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.828916 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.931579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.931634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.931643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.931658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:07 crc kubenswrapper[4813]: I0129 16:30:07.931667 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:07Z","lastTransitionTime":"2026-01-29T16:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.033871 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.033950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.033964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.034003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.034018 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.136282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.136341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.136355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.136372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.136382 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.226193 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:52:35.058242962 +0000 UTC Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.238365 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.238407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.238419 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.238437 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.238488 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.238975 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:08 crc kubenswrapper[4813]: E0129 16:30:08.239085 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.239646 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.239762 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:08 crc kubenswrapper[4813]: E0129 16:30:08.239975 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:08 crc kubenswrapper[4813]: E0129 16:30:08.240183 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.254549 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.266360 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.276885 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.291142 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.316490 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.336656 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.341595 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.341640 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.341652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.341668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.341680 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.357421 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.369699 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.386318 4813 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021d
b02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.398193 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":
\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.410671 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.423827 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.444704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.444739 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.444749 4813 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.444766 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.444777 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.444846 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742
fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.455461 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.468566 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.478720 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.489942 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 
2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.502230 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.546752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.546791 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.546799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.546816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.546827 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.648809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.648865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.648880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.648898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.648910 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.751727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.751772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.751783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.751801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.751814 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.855016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.855067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.855079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.855097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.855126 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.958064 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.958148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.958159 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.958173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:08 crc kubenswrapper[4813]: I0129 16:30:08.958182 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:08Z","lastTransitionTime":"2026-01-29T16:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.061462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.061506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.061515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.061530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.061540 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:09Z","lastTransitionTime":"2026-01-29T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.163910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.163961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.163978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.163998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.164010 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:09Z","lastTransitionTime":"2026-01-29T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.226808 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:58:35.590488007 +0000 UTC Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.239294 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:09 crc kubenswrapper[4813]: E0129 16:30:09.239492 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.268105 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.268189 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.268201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.268227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.268239 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:09Z","lastTransitionTime":"2026-01-29T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.372466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.372944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.373079 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.373250 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.373385 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:09Z","lastTransitionTime":"2026-01-29T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.477086 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.477149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.477158 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.477174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.477184 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:09Z","lastTransitionTime":"2026-01-29T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.578922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.578970 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.578986 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.579003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:09 crc kubenswrapper[4813]: I0129 16:30:09.579015 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:09Z","lastTransitionTime":"2026-01-29T16:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 16:30:10 crc kubenswrapper[4813]: I0129 16:30:10.227035 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:28:03.911146764 +0000 UTC
Jan 29 16:30:10 crc kubenswrapper[4813]: I0129 16:30:10.239581 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:10 crc kubenswrapper[4813]: I0129 16:30:10.239677 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:10 crc kubenswrapper[4813]: E0129 16:30:10.239729 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:30:10 crc kubenswrapper[4813]: E0129 16:30:10.239848 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:10 crc kubenswrapper[4813]: I0129 16:30:10.239692 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:10 crc kubenswrapper[4813]: E0129 16:30:10.239947 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:30:11 crc kubenswrapper[4813]: I0129 16:30:11.241871 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:08:31.724052911 +0000 UTC
Jan 29 16:30:11 crc kubenswrapper[4813]: I0129 16:30:11.242056 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:11 crc kubenswrapper[4813]: E0129 16:30:11.242203 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:30:11 crc kubenswrapper[4813]: I0129 16:30:11.242904 4813 scope.go:117] "RemoveContainer" containerID="a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382"
Jan 29 16:30:11 crc kubenswrapper[4813]: E0129 16:30:11.243068 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"
Jan 29 16:30:12 crc kubenswrapper[4813]: I0129 16:30:12.239542 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:12 crc kubenswrapper[4813]: E0129 16:30:12.239692 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:30:12 crc kubenswrapper[4813]: I0129 16:30:12.239868 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:12 crc kubenswrapper[4813]: E0129 16:30:12.239912 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:12 crc kubenswrapper[4813]: I0129 16:30:12.240136 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:12 crc kubenswrapper[4813]: E0129 16:30:12.240297 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:30:12 crc kubenswrapper[4813]: I0129 16:30:12.242032 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:53:08.964282259 +0000 UTC
Jan 29 16:30:13 crc kubenswrapper[4813]: I0129 16:30:13.239198 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:13 crc kubenswrapper[4813]: E0129 16:30:13.239350 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:30:13 crc kubenswrapper[4813]: I0129 16:30:13.242169 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:33:20.72927564 +0000 UTC
Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.238778 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.238838 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.238893 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.238935 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.238996 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.239150 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.242449 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:27:40.805434147 +0000 UTC
Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.434084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.434173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.434187 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.434208 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.434221 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.536835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.536902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.536916 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.536940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.536952 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.639468 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.639515 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.639526 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.639546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.639562 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.741999 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.742069 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.742089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.742124 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.742137 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.845345 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.845392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.845403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.845422 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.845437 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.859051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.859100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.859137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.859154 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.859167 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
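[The repeated "Error updating node status, will retry" entries that follow all fail for the same reason: the node.network-node-identity.openshift.io webhook serving certificate behind https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29. A minimal Go diagnostic sketch, not part of this log, using only the endpoint and dates reported in the entries below, that retrieves the certificate and prints its validity window:]

    // certcheck.go: inspect the webhook serving certificate named in the
    // kubelet errors below. Diagnostic sketch only; the 127.0.0.1:9743
    // endpoint is taken from the logged webhook URL.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Skip chain verification: the goal is to inspect the expired
        // certificate, not to reject it during the handshake.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743",
            &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        // The kubelet reports exactly this condition: current time
        // 2026-01-29T16:30:14Z is after notAfter 2025-08-24T17:21:41Z.
        if time.Now().UTC().After(cert.NotAfter) {
            fmt.Println("certificate EXPIRED: node status patches via this webhook will fail")
        }
    }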
Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.871939 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.876378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.876447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.876463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.876486 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.876502 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.891869 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.896209 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.896245 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.896255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.896273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.896284 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.911128 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.914887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.914913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.914923 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.914940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.914951 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.927315 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.931246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.931286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.931295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.931341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.931354 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.944881 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:14Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:14 crc kubenswrapper[4813]: E0129 16:30:14.945137 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.948364 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.948392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.948408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.948426 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:14 crc kubenswrapper[4813]: I0129 16:30:14.948436 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:14Z","lastTransitionTime":"2026-01-29T16:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.051743 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.051771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.051779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.051794 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.051803 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.130260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:15 crc kubenswrapper[4813]: E0129 16:30:15.130410 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:30:15 crc kubenswrapper[4813]: E0129 16:30:15.130463 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs podName:e35b844b-1645-458c-b117-f60fe6042abe nodeName:}" failed. No retries permitted until 2026-01-29 16:30:47.130446919 +0000 UTC m=+99.617650135 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs") pod "network-metrics-daemon-nsttk" (UID: "e35b844b-1645-458c-b117-f60fe6042abe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.154874 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.154939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.154953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.154977 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.154994 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.239275 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:15 crc kubenswrapper[4813]: E0129 16:30:15.239495 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.243217 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:28:06.173047075 +0000 UTC Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.257940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.257981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.258009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.258026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.258036 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.360440 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.360471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.360480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.360494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.360504 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.463568 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.463622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.463635 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.463655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.463668 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.566325 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.566366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.566378 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.566396 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.566410 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.667958 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.668004 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.668016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.668033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.668042 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.771205 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.771267 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.771279 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.771298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.771313 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.873517 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.873574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.873587 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.873605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.873617 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.976648 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.976691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.976700 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.976716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:15 crc kubenswrapper[4813]: I0129 16:30:15.976725 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:15Z","lastTransitionTime":"2026-01-29T16:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.078840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.078880 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.078891 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.078906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.078918 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.181282 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.181318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.181329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.181344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.181356 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.238931 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.238993 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.238958 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:16 crc kubenswrapper[4813]: E0129 16:30:16.239089 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:16 crc kubenswrapper[4813]: E0129 16:30:16.239230 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:16 crc kubenswrapper[4813]: E0129 16:30:16.239314 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.243591 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:08:59.364068543 +0000 UTC Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.283627 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.283667 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.283676 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.283691 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.283700 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.386284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.386320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.386329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.386344 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.386353 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.488909 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.488943 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.488953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.488966 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.488975 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.592048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.592101 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.592129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.592147 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.592158 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.694878 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.694915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.694926 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.694940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.694949 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.798084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.798139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.798148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.798165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.798175 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.900621 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.900680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.900693 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.900712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:16 crc kubenswrapper[4813]: I0129 16:30:16.900723 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:16Z","lastTransitionTime":"2026-01-29T16:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.003783 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.003841 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.003854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.003875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.003887 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.108690 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.108744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.108752 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.108769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.108778 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.211721 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.211769 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.211780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.211799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.211811 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.239068 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:17 crc kubenswrapper[4813]: E0129 16:30:17.239231 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.244225 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:35:07.919314165 +0000 UTC Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.314678 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.314724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.314734 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.314749 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.314758 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.417410 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.417467 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.417481 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.417577 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.417595 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.520767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.520824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.520836 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.520856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.520873 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.623543 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.623632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.623643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.623662 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.623674 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.670739 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/0.log"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.670796 4813 generic.go:334] "Generic (PLEG): container finished" podID="4acefc9f-f68a-4566-a0f5-656b961d4267" containerID="f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324" exitCode=1
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.670830 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7cjx7" event={"ID":"4acefc9f-f68a-4566-a0f5-656b961d4267","Type":"ContainerDied","Data":"f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324"}
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.671227 4813 scope.go:117] "RemoveContainer" containerID="f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.681656 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.695295 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.708782 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"2026-01-29T16:29:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529\\\\n2026-01-29T16:29:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529 to /host/opt/cni/bin/\\\\n2026-01-29T16:29:32Z [verbose] multus-daemon started\\\\n2026-01-29T16:29:32Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:30:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.720879 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.726061 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.726241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.726418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.726536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.726609 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.736281 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.749302 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.764304 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.785432 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.796259 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.811542 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.823705 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.829002 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.829067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.829131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.829151 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.829165 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.837984 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f87
93601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.851549 4813 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.864330 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.877664 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.890428 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.908609 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.919971 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:17Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.931666 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.931717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.931731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.931748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:17 crc kubenswrapper[4813]: I0129 16:30:17.931779 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:17Z","lastTransitionTime":"2026-01-29T16:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.034866 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.034922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.034932 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.034947 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.034957 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.137810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.137857 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.137869 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.137886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.137898 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.238907 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.238949 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:18 crc kubenswrapper[4813]: E0129 16:30:18.239080 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:18 crc kubenswrapper[4813]: E0129 16:30:18.239204 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.239288 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:18 crc kubenswrapper[4813]: E0129 16:30:18.239370 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.240360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.240400 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.240413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.240429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.240441 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.244451 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:33:02.451115522 +0000 UTC Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.252641 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.265524 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.284819 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.297980 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.308452 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.321378 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.333506 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"2026-01-29T16:29:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529\\\\n2026-01-29T16:29:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529 to /host/opt/cni/bin/\\\\n2026-01-29T16:29:32Z [verbose] multus-daemon started\\\\n2026-01-29T16:29:32Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:30:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.343190 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.343234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.343246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.343264 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.343274 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.347290 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.362413 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.373734 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.385030 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.405320 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742
fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.417045 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.433048 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.447584 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.449133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.449148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.449169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.449182 4813 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.450978 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.469654 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.481042 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.492050 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.552877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.552939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.552950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.552965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.552974 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.655186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.655227 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.655235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.655251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.655260 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.675959 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/0.log" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.676024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7cjx7" event={"ID":"4acefc9f-f68a-4566-a0f5-656b961d4267","Type":"ContainerStarted","Data":"c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.692714 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.713446 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.725967 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.740612 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.753643 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.757067 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.757125 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.757142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.757160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.757171 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.768101 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.781958 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.794434 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.805506 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.818309 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.831757 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.852019 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.859504 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.859589 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.859615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.859634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.859646 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.867134 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.880195 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.890419 4813 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.905332 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.918247 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"2026-01-29T16:29:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529\\\\n2026-01-29T16:29:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529 to /host/opt/cni/bin/\\\\n2026-01-29T16:29:32Z [verbose] multus-daemon started\\\\n2026-01-29T16:29:32Z [verbose] Readiness 
Indicator file check\\\\n2026-01-29T16:30:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.927802 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:18Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.961616 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.961641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.961649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.961661 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:18 crc kubenswrapper[4813]: I0129 16:30:18.961671 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:18Z","lastTransitionTime":"2026-01-29T16:30:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.065055 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.065146 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.065164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.065186 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.065202 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.167793 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.167832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.167842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.167863 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.167879 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.238697 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:19 crc kubenswrapper[4813]: E0129 16:30:19.238841 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.244781 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:54:26.703346697 +0000 UTC Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.270764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.270809 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.270821 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.270840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.270853 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.373439 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.373484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.373506 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.373530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.373548 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.475997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.476041 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.476057 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.476075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.476086 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.579516 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.579555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.579564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.579580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.579592 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.681785 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.681820 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.681829 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.681843 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.681853 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.793068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.793137 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.793149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.793171 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.793185 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.894945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.894997 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.895006 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.895018 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.895028 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.997291 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.997342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.997360 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.997390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:19 crc kubenswrapper[4813]: I0129 16:30:19.997412 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:19Z","lastTransitionTime":"2026-01-29T16:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.100452 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.100520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.100538 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.100566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.100583 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.202796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.202835 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.202846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.202862 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.202872 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.239251 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:20 crc kubenswrapper[4813]: E0129 16:30:20.239395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.239628 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:20 crc kubenswrapper[4813]: E0129 16:30:20.239689 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.239938 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:20 crc kubenswrapper[4813]: E0129 16:30:20.240013 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.245502 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:04:59.337688004 +0000 UTC
Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.305764 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.305800 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.305810 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.305826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.305836 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.408335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.408380 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.408389 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.408405 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.408414 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.512954 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.512994 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.513008 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.513023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.513034 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.615688 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.615748 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.615761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.615793 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.615805 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.718335 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.718638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.718815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.718939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.719062 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.821503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.821566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.821579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.821593 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.821603 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.923654 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.923903 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.924011 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.924133 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:20 crc kubenswrapper[4813]: I0129 16:30:20.924228 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:20Z","lastTransitionTime":"2026-01-29T16:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.026246 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.026278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.026286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.026298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.026308 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.129630 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.129668 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.129710 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.129728 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.129739 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.233206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.233255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.233266 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.233286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.233305 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.238607 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:21 crc kubenswrapper[4813]: E0129 16:30:21.238747 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.246005 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:23:45.565283934 +0000 UTC
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.336535 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.336578 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.336592 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.336614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.336638 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.439234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.439273 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.439283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.439298 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.439309 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.541680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.541727 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.541737 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.541756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.541769 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.644613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.644649 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.644658 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.644671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.644682 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.746792 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.746826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.746834 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.746846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.746855 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.848890 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.848927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.848939 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.848953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.848965 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.951525 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.951563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.951573 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.951588 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:21 crc kubenswrapper[4813]: I0129 16:30:21.951599 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:21Z","lastTransitionTime":"2026-01-29T16:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.054320 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.054377 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.054392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.054408 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.054418 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.156722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.156772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.156787 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.156803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.156812 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.239299 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:22 crc kubenswrapper[4813]: E0129 16:30:22.239407 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.239444 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.239496 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.240218 4813 scope.go:117] "RemoveContainer" containerID="a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382"
Jan 29 16:30:22 crc kubenswrapper[4813]: E0129 16:30:22.240516 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:30:22 crc kubenswrapper[4813]: E0129 16:30:22.240616 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.246715 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:23:20.947585134 +0000 UTC Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.258447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.258484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.258493 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.258508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.258527 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.360898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.360950 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.360961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.360978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.360991 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.463438 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.463466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.463474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.463487 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.463497 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.565424 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.565466 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.565480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.565499 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.565512 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.668176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.668225 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.668240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.668261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.668278 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.690882 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/2.log"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.694169 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609"}
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.695079 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6"
Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.749552 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47
222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.763998 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"2026-01-29T16:29:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529\\\\n2026-01-29T16:29:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529 to /host/opt/cni/bin/\\\\n2026-01-29T16:29:32Z [verbose] multus-daemon started\\\\n2026-01-29T16:29:32Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:30:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.770978 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.771023 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.771035 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.771051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.771063 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.775608 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.788001 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 
16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.800333 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.811599 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.823794 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 
2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.836008 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.847030 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.865609 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.873689 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.873883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.873968 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.874063 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.874150 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.880522 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.893786 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.907227 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.921535 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.940272 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.952054 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.963695 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.975589 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:22Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.977288 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.977327 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.977338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.977352 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:22 crc kubenswrapper[4813]: I0129 16:30:22.977362 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:22Z","lastTransitionTime":"2026-01-29T16:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.079815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.080129 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.080259 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.080342 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.080427 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.183492 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.183818 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.183908 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.183995 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.184135 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.239561 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:23 crc kubenswrapper[4813]: E0129 16:30:23.239752 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.247579 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 08:11:28.421644076 +0000 UTC Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.286846 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.287135 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.287231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.287418 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.287501 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.390003 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.390062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.390084 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.390149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.390175 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.492401 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.492480 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.492494 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.492514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.492530 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.595160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.595197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.595206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.595218 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.595231 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.697300 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.697374 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.697397 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.697425 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.697446 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.799597 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.799672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.799687 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.799706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.799749 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.901960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.902010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.902021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.902034 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:23 crc kubenswrapper[4813]: I0129 16:30:23.902043 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:23Z","lastTransitionTime":"2026-01-29T16:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.004411 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.004454 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.004469 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.004488 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.004505 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.106906 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.106940 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.106949 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.106963 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.106972 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.209681 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.209904 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.210020 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.210089 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.210178 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.239272 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.239347 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:24 crc kubenswrapper[4813]: E0129 16:30:24.239411 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.239360 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:24 crc kubenswrapper[4813]: E0129 16:30:24.239561 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:24 crc kubenswrapper[4813]: E0129 16:30:24.239603 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.248874 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:38:26.709507161 +0000 UTC Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.313704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.313744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.313755 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.313768 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.313777 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.416284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.416328 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.416338 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.416354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.416364 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.519131 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.519165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.519173 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.519185 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.519212 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.622039 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.622081 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.622093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.622142 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.622167 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.724566 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.724671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.724708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.724774 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.724792 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.827751 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.827801 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.827813 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.827832 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.827845 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.932062 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.932097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.932130 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.932145 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:24 crc kubenswrapper[4813]: I0129 16:30:24.932155 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:24Z","lastTransitionTime":"2026-01-29T16:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.034830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.034897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.034907 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.034922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.034932 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.130520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.130575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.130594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.130620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.130640 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: E0129 16:30:25.147184 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.152170 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.152254 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.152274 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.152329 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.152349 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: E0129 16:30:25.174516 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.185037 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.185162 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
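Every failed attempt in this stretch has the same root cause, stated at the end of the err string: the serving certificate of the node.network-node-identity.openshift.io webhook expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-29T16:30:25Z, so Go's TLS stack rejects the handshake before the POST to https://127.0.0.1:9743/node is ever delivered. A minimal Go sketch of the same validity-window check follows; it is illustrative only (the address is copied from the log and is reachable only on the affected host, and InsecureSkipVerify is used solely so the handshake completes far enough to read the peer certificate):

```go
// Sketch: inspect the webhook's serving certificate the way the kubelet's
// TLS handshake does. crypto/x509 fails verification when the current time
// falls outside [NotBefore, NotAfter], producing the error text seen in the
// log: "certificate has expired or is not yet valid".
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address copied from the failing Post in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert, now := certs[0], time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: now %s is outside [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```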
event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.185188 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.185219 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.185238 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: E0129 16:30:25.204633 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.211412 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.211484 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
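The Ready=False condition itself is independent of the webhook failure: the container runtime reports NetworkReady=false because no CNI configuration exists in /etc/kubernetes/cni/net.d/, a file that on this cluster is typically written by the network operator's pods once they start. A short sketch follows (not kubelet or CRI-O code, just the same directory check, with the path taken from the log message):

```go
// Sketch: check the directory named in the NetworkPluginNotReady message.
// A runtime considers the network plugin ready only once a CNI config
// (*.conf, *.conflist, or *.json) appears there.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(dir, pat))
		if err == nil {
			found = append(found, m...)
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
		os.Exit(1)
	}
	fmt.Println("CNI configs:", found)
}
```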
event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.211496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.211512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.211523 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: E0129 16:30:25.225727 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:25Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.229428 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.229474 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
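The large err="failed to patch status ..." payload is not free-form text: it is the strategic-merge patch the kubelet builds against the Node object. The $setElementOrder/conditions directive pins the ordering of the conditions list, each condition entry merges into the existing list by its type key, and allocatable, capacity, images, and nodeInfo ride along in the same patch. A stripped-down sketch of the same document shape (stdlib only; field values copied from the log entries above):

```go
// Sketch: the structure of the node-status strategic-merge patch that the
// kubelet sends (and the webhook rejects) in the entries above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Directive understood by strategic merge patch: keep this order.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			// Each entry merges into the existing list by its "type" key.
			"conditions": []map[string]string{{
				"type":               "Ready",
				"status":             "False",
				"reason":             "KubeletNotReady",
				"lastHeartbeatTime":  "2026-01-29T16:30:25Z",
				"lastTransitionTime": "2026-01-29T16:30:25Z",
			}},
		},
	}
	b, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```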
event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.229579 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.229825 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.229840 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.239285 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:25 crc kubenswrapper[4813]: E0129 16:30:25.239438 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:25 crc kubenswrapper[4813]: E0129 16:30:25.241102 4813 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d38b52e6-6d60-4e06-b22e-49bd8ff8645c\\\",\\\"systemUUID\\\":\\\"b2b97708-a7a8-4117-92e9-16865fc3d92d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:25Z is after 2025-08-24T17:21:41Z"
Jan 29 16:30:25 crc kubenswrapper[4813]: E0129 16:30:25.241259 4813 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
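Editor's note: the node status update above fails for a reason unrelated to its payload. The kubelet cannot deliver the patch because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. Below is a minimal Go sketch for inspecting such a certificate from the node itself; the address comes from the log, and the unverified dial is a deliberate assumption so an expired certificate can be read rather than rejected at handshake time. This is a diagnostic sketch, not part of kubelet.

```go
// certcheck.go - sketch: dial a TLS endpoint without verification and
// report the peer certificate's validity window against the local clock.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // network-node-identity webhook address from the log

	// InsecureSkipVerify is deliberate here: a verifying handshake would
	// reject the expired certificate before we could inspect it.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

Run against this endpoint it should report notAfter=2025-08-24T17:21:41Z and expired=true, matching the x509 error the kubelet records on every retry above.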
event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.242944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.242955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.242973 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.242984 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.249362 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 09:57:12.290942486 +0000 UTC Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.344833 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.344865 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.344879 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.344894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.344907 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.447704 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.447736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.447744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.447758 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.447766 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.550093 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.550160 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.550169 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.550183 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.550192 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.652802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.652842 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.652856 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.652872 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.652881 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.754858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.754901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.754912 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.754925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.754936 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.857286 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.857523 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.857706 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.857902 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.858097 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.960297 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.960341 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.960354 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.960372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:25 crc kubenswrapper[4813]: I0129 16:30:25.960386 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:25Z","lastTransitionTime":"2026-01-29T16:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.062680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.062711 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.062722 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.062736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.062747 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.165139 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.165176 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.165184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.165197 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.165209 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.239027 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.239027 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:26 crc kubenswrapper[4813]: E0129 16:30:26.239181 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.239165 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:26 crc kubenswrapper[4813]: E0129 16:30:26.239236 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:26 crc kubenswrapper[4813]: E0129 16:30:26.239402 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.249610 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:15:27.021942013 +0000 UTC Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.267928 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.267980 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.268009 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.268026 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.268038 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.370644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.370887 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.370898 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.370913 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.370926 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.473244 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.473283 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.473295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.473316 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.473329 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.575307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.575346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.575358 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.575372 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.575382 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.677679 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.677712 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.677720 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.677734 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.677742 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.779672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.779746 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.779772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.779799 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.779816 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.883149 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.883223 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.883236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.883261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.883273 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.986795 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.986844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.986876 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.986897 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:26 crc kubenswrapper[4813]: I0129 16:30:26.986910 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:26Z","lastTransitionTime":"2026-01-29T16:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.089802 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.089875 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.089901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.089938 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.089962 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.192961 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.192998 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.193007 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.193029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.193037 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.239538 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:27 crc kubenswrapper[4813]: E0129 16:30:27.239791 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.249835 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:21:06.077131885 +0000 UTC Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.297278 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.297351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.297369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.297399 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.297420 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.400349 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.400385 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.400393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.400406 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.400416 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.502029 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.502074 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.502085 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.502099 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.502129 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.604852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.604901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.604910 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.604922 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.604930 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.707271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.707314 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.707324 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.707346 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.707357 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.810090 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.810195 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.810214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.810233 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.810247 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.913355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.913435 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.913463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.913495 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:27 crc kubenswrapper[4813]: I0129 16:30:27.913518 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:27Z","lastTransitionTime":"2026-01-29T16:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.016226 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.016271 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.016284 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.016302 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.016319 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.118470 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.118771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.118837 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.118918 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.118981 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.221463 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.221503 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.221514 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.221530 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.221542 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.239046 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:28 crc kubenswrapper[4813]: E0129 16:30:28.239210 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.239238 4813 util.go:30] "No sandbox for pod can be found. 
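Editor's note: everything in this stretch of the log shares one root cause reported by the container runtime: no CNI configuration exists under /etc/kubernetes/cni/net.d/, so the node stays NotReady and sandbox creation for the affected pods is skipped on every sync. Below is a small sketch of the presence check an operator might script before digging further; the directory is the one named in the log, while the .conf/.conflist/.json extensions follow common CNI conventions and are an assumption here, not kubelet's actual config-loading logic.

```go
// cnicheck.go - sketch: report whether the CNI config directory contains
// any network configuration, mirroring the condition kubelet complains about.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log messages

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}

	var found []string
	for _, e := range entries {
		// Conventional CNI config extensions; a heuristic, not kubelet's loader.
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}

	if len(found) == 0 {
		fmt.Println("no CNI configuration files found; network plugin not ready")
		return
	}
	fmt.Println("CNI configs present:", found)
}
```

On this node it would print the "not ready" branch until the network provider (here, the OVN/network-node-identity stack that is itself blocked on the expired webhook certificate) writes its configuration into the directory.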
Jan 29 16:30:28 crc kubenswrapper[4813]: E0129 16:30:28.239356 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.239525 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:28 crc kubenswrapper[4813]: E0129 16:30:28.239634 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.250801 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:58:39.441440682 +0000 UTC
Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.252383 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.263158 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71cdf350-59d3-4d6f-8995-173528429b59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e51aaf01481487e2990aa127eb64f4d50479a381583437e8a89025f0bf05975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5pfvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-r269r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.273399 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nsttk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e35b844b-1645-458c-b117-f60fe6042abe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfnj4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nsttk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.300925 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf2fe979-4611-4ea3-be48-5f355a5ff0b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://841e0ce2e4667c419a76fefecb7035884770026d74894d5d586ac44230c4da2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0bd207da6435fe642ca9bb3090ac66b8c2c16ac0dd43f428cdab95db6e77f88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1fd98c27513833a109aa0351ff9ba0e660523a02ac1935075e6dd6844d58e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e17232c38d5f9b28fe08784f807e02758544186
864b208835858445e7af8f60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://246aaf0941eb36fb44378bb11620e18b7c94defbe1b6db18162b5f8f64e1d0e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e032fcace603d8791508831356c40088a1a00e9dfb7325c4b02106ac24377920\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://468c9f7783ae8ecfa0a62a53bf46c1b36e1d4806c089e722c917a836b37dc9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc501e99d1aa732899891e1786c28a186ac62438feb7b7d1e6f58268a3d064e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.324656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.324763 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.324789 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.324817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.324834 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.335730 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-7cjx7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4acefc9f-f68a-4566-a0f5-656b961d4267\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:30:17Z\\\",\\\"message\\\":\\\"2026-01-29T16:29:32+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529\\\\n2026-01-29T16:29:32+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cfe9b90f-f3bb-4e7c-ab4b-c6109d480529 to /host/opt/cni/bin/\\\\n2026-01-29T16:29:32Z [verbose] multus-daemon started\\\\n2026-01-29T16:29:32Z [verbose] Readiness Indicator file check\\\\n2026-01-29T16:30:17Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:30:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shgjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-7cjx7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.367082 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7rrb4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb9c786-b3de-45db-8f65-712bcbbe8709\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e19db9bb62f1cdb195833d2bb43e93f27f325bad34be8df94dc11ff1c6f768\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7csg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7rrb4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.395457 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"281ab41c-3542-426a-817d-2f8bdf50b248\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd947dc0a8b840a1144faba2a799211698726bed011aff1bfdad713e427c2b3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec9812da5d45494d9e47222818b2651eb88524dcc018af5abdfadf92a5c9a86f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3373161a56d0021db02ae65549f13af5d345c9b527ce71e6f6fe90ea8966040f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.414909 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5812dc35-171f-4cad-8da9-9f17499b5131\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:30:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 16:29:27.805454 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 16:29:27.805602 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 16:29:27.806409 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2040343965/tls.crt::/tmp/serving-cert-2040343965/tls.key\\\\\\\"\\\\nI0129 16:29:28.000928 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 16:29:28.008745 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 16:29:28.008788 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 16:29:28.008814 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 16:29:28.008820 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 16:29:28.021619 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0129 16:29:28.021644 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 16:29:28.022273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022355 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 16:29:28.022400 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 16:29:28.022424 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 16:29:28.022450 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 16:29:28.022479 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 16:29:28.023686 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.427845 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.427876 4813 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.427886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.427899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.427909 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.428499 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4319e83-35f3-467d-bcc0-ade49d870e88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e84e9a5bc78991ef442dee6df9b7520ddf5b56ba6e51123a8436e6e6d387806f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59711d3903504e7f6e0c92e198f996eaad115c99c0aac570a33b82c38a5623c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a62e9c1a94df4dd774662814146f79f86fe70be44bdebe7a3ad7259fcb1c79b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b39c1b32b5571d87d930e514c629c66afd44557a3ea5d2d56a3a01cbf7f1c554\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.443498 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4c032693a16c2dfcb3f9503f326f455f0598da1200836ea7f61dcdf18c21c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.461359 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.478808 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://652c028edb1a203a310882ca1ad1ae3da1bcb249595e599f9eeb4da26871bcf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4fc0233598426162c4b723941f0993f430338c40285a884cdc4cbdfdd93dd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.498616 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://171b878e382b59dfeec68ec8266bb42df77af292
8121594f1ae463be8460a609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T16:29:55Z\\\",\\\"message\\\":\\\"Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973267 6536 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 16:29:55.973357 6536 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 16:29:55.973264 6536 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:30:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt4dq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cj2h6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.510304 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2987565-d79b-47ee-850d-774214a23f77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60c1b284fa7b0e7a49ca2a2bd480e629f6b4920a7466f88d42490c4605c89659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd38a450fbed6f278b26ca5d8a1f46d13e67c0d419f36b1014d6b8351a0646f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-df4hv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4dpk2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 
16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.521643 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe1fdaaf7bb60fb0455683a3e5894ace616039ccaefb0cbf7e752bc8c644e8c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.531010 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.531065 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.531076 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.531097 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.531131 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.537017 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-srj6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b262d6-5205-4aaf-85df-6ac3c03c5d93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52073a0425602b1b840024011c43dcb6901ecacf82a2fdf0529c36f705c6075d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thhjg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-srj6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.554838 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-58k2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2363cfc2-15b2-44f3-bd87-0e37a79ab157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16071f4b3a0c23a25b8974056d5460acf74c8a26816aea347c37479c3ee4673c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T16:29:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8814d10e1f5951aa345d1e7893f19cb0856593670e7a319b22334a80950070da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53600f8793601c5f8869b747b56fffb1b5493dc2bd4cec40f6eef74d5a3e3b86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0661f18771d97e17012fb4f4506d1f7840f0833aabb8bb37836131b5cf0c96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30e71cc74aa50d086a2bb1df540679e5212531cb76836b02a19086fe52b17dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4e1dc8ce067aaab1511e6a5d6f8c3f42f96270723f0f8e3e9708efd01e6e5e9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50347005e8eed652fcbc6435d2ecb3a01a0c177f971246bc5002f2b0bdcb9a9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T16:29:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T16:29:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqh4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T16:29:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-58k2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.567653 4813 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T16:29:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T16:30:28Z is after 2025-08-24T17:21:41Z" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.633315 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.633362 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.633375 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.633393 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.633407 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.736073 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.736815 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.736900 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.736984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.737093 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.839888 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.839934 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.839944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.839960 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.839986 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.941672 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.941731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.941744 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.941761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:28 crc kubenswrapper[4813]: I0129 16:30:28.941772 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:28Z","lastTransitionTime":"2026-01-29T16:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.045251 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.045381 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.045407 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.045442 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.045467 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.148680 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.148740 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.148757 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.148775 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.148788 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.239316 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:29 crc kubenswrapper[4813]: E0129 16:30:29.239462 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.251151 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:46:09.686372336 +0000 UTC Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.251231 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.251281 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.251295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.251315 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.251325 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.354152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.354206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.354214 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.354230 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.354238 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.496196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.496447 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.496536 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.496613 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.496704 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.599656 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.599686 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.599703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.599717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.599727 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.702295 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.702351 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.702384 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.702413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.702434 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.805184 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.805213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.805222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.805234 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.805243 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.907546 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.907574 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.907582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.907594 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:29 crc kubenswrapper[4813]: I0129 16:30:29.907602 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:29Z","lastTransitionTime":"2026-01-29T16:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.010143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.010196 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.010213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.010235 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.010251 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.112964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.113033 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.113048 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.113072 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.113087 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.215780 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.216019 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.216088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.216174 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.216233 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.239294 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.239371 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:30 crc kubenswrapper[4813]: E0129 16:30:30.239405 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.239294 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:30 crc kubenswrapper[4813]: E0129 16:30:30.239512 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:30 crc kubenswrapper[4813]: E0129 16:30:30.239549 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.251539 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:10:31.1969811 +0000 UTC Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.319148 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.319497 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.319731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.319826 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.319898 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.422175 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.422213 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.422222 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.422236 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.422245 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.524427 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.524461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.524471 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.524485 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.524493 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.627088 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.627192 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.627206 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.627221 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.627232 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.729194 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.729232 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.729241 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.729255 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.729264 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.831905 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.831945 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.831953 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.831965 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.831974 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.933851 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.934193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.934337 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.934433 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 16:30:30 crc kubenswrapper[4813]: I0129 16:30:30.934527 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:30Z","lastTransitionTime":"2026-01-29T16:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.037969 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.038321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.038413 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.038500 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.038579 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.141012 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.141305 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.141382 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.141462 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.141565 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.239302 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:31 crc kubenswrapper[4813]: E0129 16:30:31.239560 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.244701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.244927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.245016 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.245143 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.245247 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.252281 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:51:49.514028998 +0000 UTC
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.348564 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.348840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.348915 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.348993 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.349067 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.451828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.451877 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.451899 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.451917 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.451929 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.555660 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.555703 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.555716 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.555731 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.555744 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.658796 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.658858 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.658868 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.658883 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.658895 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.761638 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.761696 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.761708 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.761761 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.761773 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.864671 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.864707 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.864733 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.864754 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.864770 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.967321 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.967359 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.967369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.967383 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:31 crc kubenswrapper[4813]: I0129 16:30:31.967393 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:31Z","lastTransitionTime":"2026-01-29T16:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.021205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.021481 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:36.021450397 +0000 UTC m=+148.508653613 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.070261 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.070307 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.070318 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.070334 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.070345 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.122075 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.122169 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.122203 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.122248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122306 4813 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122369 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122392 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122409 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122415 4813 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122428 4813 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122423 4813 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122497 4813 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122408 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:31:36.122386463 +0000 UTC m=+148.609589729 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122606 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 16:31:36.12256872 +0000 UTC m=+148.609772026 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122636 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 16:31:36.122627772 +0000 UTC m=+148.609831078 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.122649 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:31:36.122643083 +0000 UTC m=+148.609846419 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.173461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.173524 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.173537 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.173553 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.173564 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.239335 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.239340 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.239502 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.239353 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.239616 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:32 crc kubenswrapper[4813]: E0129 16:30:32.239836 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.253040 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:48:36.513151775 +0000 UTC
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.275600 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.275643 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.275653 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.275665 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.275678 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.378444 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.378550 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.378563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.378580 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.378591 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.481201 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.481240 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.481249 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.481263 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.481272 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.583348 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.583392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.583402 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.583416 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.583426 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.685927 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.685959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.685967 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.685981 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.685990 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.787508 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.787551 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.787561 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.787575 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.787584 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.889366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.889394 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.889403 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.889415 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.889423 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.992021 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.992152 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.992165 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.992182 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:32 crc kubenswrapper[4813]: I0129 16:30:32.992194 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:32Z","lastTransitionTime":"2026-01-29T16:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.094392 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.094420 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.094429 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.094443 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.094453 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.196771 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.196808 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.196817 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.196830 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.196840 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.239531 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:33 crc kubenswrapper[4813]: E0129 16:30:33.239678 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.253962 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:47:29.502775122 +0000 UTC
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.298767 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.298805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.298814 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.298828 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.298837 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.401555 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.401609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.401620 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.401641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.401654 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.503984 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.504040 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.504051 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.504068 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.504079 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.606417 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.606461 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.606472 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.606512 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.606526 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.708582 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.708614 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.708622 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.708634 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.708644 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.810605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.810644 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.810655 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.810670 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.810679 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.913563 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.913605 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.913615 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.913632 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:33 crc kubenswrapper[4813]: I0129 16:30:33.913643 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:33Z","lastTransitionTime":"2026-01-29T16:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.016445 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.016531 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.016570 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.016756 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.016770 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.119854 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.119921 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.119935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.119964 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.119978 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.222725 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.222779 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.222790 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.222805 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.222814 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.239283 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.239586 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.239639 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 16:30:34 crc kubenswrapper[4813]: E0129 16:30:34.240303 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 16:30:34 crc kubenswrapper[4813]: E0129 16:30:34.240590 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 16:30:34 crc kubenswrapper[4813]: E0129 16:30:34.240701 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.255047 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:46:26.890877864 +0000 UTC
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.326369 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.326455 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.326496 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.326520 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.326537 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.429100 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.429164 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.429177 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.429193 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.429205 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.531881 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.531914 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.531925 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.531942 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.531954 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.633609 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.633652 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.633685 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.633701 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.633711 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.736313 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.736355 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.736370 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.736390 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.736403 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.839641 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.839683 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.839698 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.839717 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.839732 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.942772 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.942816 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.942827 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.942844 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:34 crc kubenswrapper[4813]: I0129 16:30:34.942857 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:34Z","lastTransitionTime":"2026-01-29T16:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.044824 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.044867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.044882 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.044901 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.044914 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:35Z","lastTransitionTime":"2026-01-29T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.147663 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.147724 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.147736 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.147753 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.147766 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:35Z","lastTransitionTime":"2026-01-29T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.239593 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:30:35 crc kubenswrapper[4813]: E0129 16:30:35.239713 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.250323 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.250366 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.250376 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.250391 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.250405 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:35Z","lastTransitionTime":"2026-01-29T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.255972 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:33:36.306329674 +0000 UTC
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.353803 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.353852 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.353867 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.353886 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.353899 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:35Z","lastTransitionTime":"2026-01-29T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.455894 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.455935 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.455944 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.455959 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.455972 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:35Z","lastTransitionTime":"2026-01-29T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.470840 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.470955 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.470983 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.471075 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.471176 4813 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T16:30:35Z","lastTransitionTime":"2026-01-29T16:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.527037 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8"]
Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.527752 4813 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.530965 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.531863 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.531947 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.532205 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.550474 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=64.55045133 podStartE2EDuration="1m4.55045133s" podCreationTimestamp="2026-01-29 16:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.547981459 +0000 UTC m=+88.035184695" watchObservedRunningTime="2026-01-29 16:30:35.55045133 +0000 UTC m=+88.037654556" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.557951 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3cbb2e-2a56-4226-a802-b66ede31d41d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.557999 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a3cbb2e-2a56-4226-a802-b66ede31d41d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.558014 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a3cbb2e-2a56-4226-a802-b66ede31d41d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.558029 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a3cbb2e-2a56-4226-a802-b66ede31d41d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.558091 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a3cbb2e-2a56-4226-a802-b66ede31d41d-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.578088 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-7cjx7" podStartSLOduration=67.578069793 podStartE2EDuration="1m7.578069793s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.566577447 +0000 UTC m=+88.053780653" watchObservedRunningTime="2026-01-29 16:30:35.578069793 +0000 UTC m=+88.065272999" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.592077 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7rrb4" podStartSLOduration=67.5920559 podStartE2EDuration="1m7.5920559s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.578440766 +0000 UTC m=+88.065643982" watchObservedRunningTime="2026-01-29 16:30:35.5920559 +0000 UTC m=+88.079259116" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.635208 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4dpk2" podStartSLOduration=66.635191257 podStartE2EDuration="1m6.635191257s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.634890406 +0000 UTC m=+88.122093622" watchObservedRunningTime="2026-01-29 16:30:35.635191257 +0000 UTC m=+88.122394473" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.635354 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podStartSLOduration=66.635349053 podStartE2EDuration="1m6.635349053s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.62204296 +0000 UTC m=+88.109246176" watchObservedRunningTime="2026-01-29 16:30:35.635349053 +0000 UTC m=+88.122552269" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.658995 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3cbb2e-2a56-4226-a802-b66ede31d41d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.659057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a3cbb2e-2a56-4226-a802-b66ede31d41d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.659080 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4a3cbb2e-2a56-4226-a802-b66ede31d41d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.659098 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a3cbb2e-2a56-4226-a802-b66ede31d41d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.659345 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a3cbb2e-2a56-4226-a802-b66ede31d41d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.659405 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4a3cbb2e-2a56-4226-a802-b66ede31d41d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.659881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4a3cbb2e-2a56-4226-a802-b66ede31d41d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.660181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a3cbb2e-2a56-4226-a802-b66ede31d41d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.667091 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3cbb2e-2a56-4226-a802-b66ede31d41d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.676404 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a3cbb2e-2a56-4226-a802-b66ede31d41d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpzv8\" (UID: \"4a3cbb2e-2a56-4226-a802-b66ede31d41d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.677895 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.677878467 podStartE2EDuration="1m7.677878467s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.653452083 +0000 UTC m=+88.140655309" watchObservedRunningTime="2026-01-29 16:30:35.677878467 +0000 UTC m=+88.165081683" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.678534 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.678527501 podStartE2EDuration="40.678527501s" podCreationTimestamp="2026-01-29 16:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.676739675 +0000 UTC m=+88.163942911" watchObservedRunningTime="2026-01-29 16:30:35.678527501 +0000 UTC m=+88.165730717" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.743574 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-srj6p" podStartSLOduration=67.743545418 podStartE2EDuration="1m7.743545418s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.742935956 +0000 UTC m=+88.230139172" watchObservedRunningTime="2026-01-29 16:30:35.743545418 +0000 UTC m=+88.230748634" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.761097 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-58k2s" podStartSLOduration=67.761070957 podStartE2EDuration="1m7.761070957s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.759910034 +0000 UTC m=+88.247113250" watchObservedRunningTime="2026-01-29 16:30:35.761070957 +0000 UTC m=+88.248274173" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.796766 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=66.796728547 podStartE2EDuration="1m6.796728547s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.794957351 +0000 UTC m=+88.282160567" watchObservedRunningTime="2026-01-29 16:30:35.796728547 +0000 UTC m=+88.283931763" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.826365 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podStartSLOduration=67.826333133 podStartE2EDuration="1m7.826333133s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:35.825309355 +0000 UTC m=+88.312512581" watchObservedRunningTime="2026-01-29 16:30:35.826333133 +0000 UTC m=+88.313536349" Jan 29 16:30:35 crc kubenswrapper[4813]: I0129 16:30:35.843264 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" Jan 29 16:30:35 crc kubenswrapper[4813]: W0129 16:30:35.859372 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a3cbb2e_2a56_4226_a802_b66ede31d41d.slice/crio-ce2f4b2be801253801d6209ab1b18f30523906c36f3b4d3c924e970339bf9642 WatchSource:0}: Error finding container ce2f4b2be801253801d6209ab1b18f30523906c36f3b4d3c924e970339bf9642: Status 404 returned error can't find the container with id ce2f4b2be801253801d6209ab1b18f30523906c36f3b4d3c924e970339bf9642 Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.239337 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.239378 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.239462 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:36 crc kubenswrapper[4813]: E0129 16:30:36.239532 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:36 crc kubenswrapper[4813]: E0129 16:30:36.239729 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:36 crc kubenswrapper[4813]: E0129 16:30:36.239800 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.256439 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 15:12:01.601478515 +0000 UTC Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.256519 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.266843 4813 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.738323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" event={"ID":"4a3cbb2e-2a56-4226-a802-b66ede31d41d","Type":"ContainerStarted","Data":"7db852c538eea07b89b01511c666a6a438cc7efa4c2cb067f30461e8fc3aadaf"} Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.738369 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" event={"ID":"4a3cbb2e-2a56-4226-a802-b66ede31d41d","Type":"ContainerStarted","Data":"ce2f4b2be801253801d6209ab1b18f30523906c36f3b4d3c924e970339bf9642"} Jan 29 16:30:36 crc kubenswrapper[4813]: I0129 16:30:36.750910 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpzv8" podStartSLOduration=68.750891218 podStartE2EDuration="1m8.750891218s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:30:36.750341767 +0000 UTC m=+89.237545003" watchObservedRunningTime="2026-01-29 16:30:36.750891218 +0000 UTC m=+89.238094434" Jan 29 16:30:37 crc kubenswrapper[4813]: I0129 16:30:37.239394 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:37 crc kubenswrapper[4813]: E0129 16:30:37.239547 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:38 crc kubenswrapper[4813]: I0129 16:30:38.243924 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:38 crc kubenswrapper[4813]: E0129 16:30:38.244042 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:38 crc kubenswrapper[4813]: I0129 16:30:38.244311 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:38 crc kubenswrapper[4813]: E0129 16:30:38.244395 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:38 crc kubenswrapper[4813]: I0129 16:30:38.244576 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:38 crc kubenswrapper[4813]: E0129 16:30:38.244664 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:39 crc kubenswrapper[4813]: I0129 16:30:39.239344 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:39 crc kubenswrapper[4813]: E0129 16:30:39.239481 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:40 crc kubenswrapper[4813]: I0129 16:30:40.239019 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:40 crc kubenswrapper[4813]: I0129 16:30:40.239142 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:40 crc kubenswrapper[4813]: I0129 16:30:40.239596 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:40 crc kubenswrapper[4813]: E0129 16:30:40.239796 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:40 crc kubenswrapper[4813]: E0129 16:30:40.240022 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:40 crc kubenswrapper[4813]: E0129 16:30:40.240204 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:40 crc kubenswrapper[4813]: I0129 16:30:40.250829 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 16:30:40 crc kubenswrapper[4813]: I0129 16:30:40.335365 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" probeResult="failure" output="" Jan 29 16:30:41 crc kubenswrapper[4813]: I0129 16:30:41.238998 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:41 crc kubenswrapper[4813]: E0129 16:30:41.239180 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:42 crc kubenswrapper[4813]: I0129 16:30:42.239061 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:42 crc kubenswrapper[4813]: I0129 16:30:42.239067 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:42 crc kubenswrapper[4813]: E0129 16:30:42.239208 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:42 crc kubenswrapper[4813]: I0129 16:30:42.239297 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:42 crc kubenswrapper[4813]: E0129 16:30:42.239423 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:42 crc kubenswrapper[4813]: E0129 16:30:42.239540 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:43 crc kubenswrapper[4813]: I0129 16:30:43.240011 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:43 crc kubenswrapper[4813]: E0129 16:30:43.240170 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:43 crc kubenswrapper[4813]: I0129 16:30:43.760723 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/3.log" Jan 29 16:30:43 crc kubenswrapper[4813]: I0129 16:30:43.762000 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/2.log" Jan 29 16:30:43 crc kubenswrapper[4813]: I0129 16:30:43.764426 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609" exitCode=1 Jan 29 16:30:43 crc kubenswrapper[4813]: I0129 16:30:43.764462 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609"} Jan 29 16:30:43 crc kubenswrapper[4813]: I0129 16:30:43.764501 4813 scope.go:117] "RemoveContainer" containerID="a196cc78aced77c33fb0fac802d2349477768742fa19575b4e06e0437a041382" Jan 29 16:30:43 crc kubenswrapper[4813]: I0129 16:30:43.765236 4813 scope.go:117] "RemoveContainer" containerID="171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609" Jan 29 16:30:43 crc kubenswrapper[4813]: E0129 16:30:43.765407 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" Jan 29 16:30:43 crc kubenswrapper[4813]: I0129 16:30:43.800362 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.800340497 podStartE2EDuration="3.800340497s" podCreationTimestamp="2026-01-29 16:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
16:30:43.79987961 +0000 UTC m=+96.287082826" watchObservedRunningTime="2026-01-29 16:30:43.800340497 +0000 UTC m=+96.287543713" Jan 29 16:30:44 crc kubenswrapper[4813]: I0129 16:30:44.239394 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:44 crc kubenswrapper[4813]: I0129 16:30:44.239471 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:44 crc kubenswrapper[4813]: I0129 16:30:44.239560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:44 crc kubenswrapper[4813]: E0129 16:30:44.239629 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:44 crc kubenswrapper[4813]: E0129 16:30:44.239745 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:44 crc kubenswrapper[4813]: E0129 16:30:44.239892 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:44 crc kubenswrapper[4813]: I0129 16:30:44.770366 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/3.log" Jan 29 16:30:45 crc kubenswrapper[4813]: I0129 16:30:45.238635 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:45 crc kubenswrapper[4813]: E0129 16:30:45.238772 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:46 crc kubenswrapper[4813]: I0129 16:30:46.238756 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:46 crc kubenswrapper[4813]: E0129 16:30:46.238907 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:46 crc kubenswrapper[4813]: I0129 16:30:46.239184 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:46 crc kubenswrapper[4813]: E0129 16:30:46.239270 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:46 crc kubenswrapper[4813]: I0129 16:30:46.239496 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:46 crc kubenswrapper[4813]: E0129 16:30:46.239602 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:47 crc kubenswrapper[4813]: I0129 16:30:47.173282 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:47 crc kubenswrapper[4813]: E0129 16:30:47.173484 4813 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:30:47 crc kubenswrapper[4813]: E0129 16:30:47.173565 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs podName:e35b844b-1645-458c-b117-f60fe6042abe nodeName:}" failed. No retries permitted until 2026-01-29 16:31:51.173542023 +0000 UTC m=+163.660745309 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs") pod "network-metrics-daemon-nsttk" (UID: "e35b844b-1645-458c-b117-f60fe6042abe") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 16:30:47 crc kubenswrapper[4813]: I0129 16:30:47.239427 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:47 crc kubenswrapper[4813]: E0129 16:30:47.239612 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:48 crc kubenswrapper[4813]: I0129 16:30:48.239552 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:48 crc kubenswrapper[4813]: E0129 16:30:48.239696 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:48 crc kubenswrapper[4813]: I0129 16:30:48.239577 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:48 crc kubenswrapper[4813]: I0129 16:30:48.239562 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:48 crc kubenswrapper[4813]: E0129 16:30:48.241363 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:48 crc kubenswrapper[4813]: E0129 16:30:48.241475 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:49 crc kubenswrapper[4813]: I0129 16:30:49.239348 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:49 crc kubenswrapper[4813]: E0129 16:30:49.239466 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:50 crc kubenswrapper[4813]: I0129 16:30:50.238878 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:50 crc kubenswrapper[4813]: E0129 16:30:50.239056 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:50 crc kubenswrapper[4813]: I0129 16:30:50.239211 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:50 crc kubenswrapper[4813]: I0129 16:30:50.239289 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:50 crc kubenswrapper[4813]: E0129 16:30:50.239455 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:50 crc kubenswrapper[4813]: E0129 16:30:50.239765 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:51 crc kubenswrapper[4813]: I0129 16:30:51.239298 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:51 crc kubenswrapper[4813]: E0129 16:30:51.239437 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:52 crc kubenswrapper[4813]: I0129 16:30:52.239483 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:52 crc kubenswrapper[4813]: I0129 16:30:52.239621 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:52 crc kubenswrapper[4813]: I0129 16:30:52.239498 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:52 crc kubenswrapper[4813]: E0129 16:30:52.239782 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:52 crc kubenswrapper[4813]: E0129 16:30:52.239944 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:52 crc kubenswrapper[4813]: E0129 16:30:52.240069 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:53 crc kubenswrapper[4813]: I0129 16:30:53.239617 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:53 crc kubenswrapper[4813]: E0129 16:30:53.239804 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:54 crc kubenswrapper[4813]: I0129 16:30:54.239684 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:54 crc kubenswrapper[4813]: I0129 16:30:54.239761 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:54 crc kubenswrapper[4813]: E0129 16:30:54.239822 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:54 crc kubenswrapper[4813]: I0129 16:30:54.239703 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:54 crc kubenswrapper[4813]: E0129 16:30:54.239863 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:54 crc kubenswrapper[4813]: E0129 16:30:54.239933 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:55 crc kubenswrapper[4813]: I0129 16:30:55.238895 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:55 crc kubenswrapper[4813]: E0129 16:30:55.239487 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:56 crc kubenswrapper[4813]: I0129 16:30:56.239387 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:56 crc kubenswrapper[4813]: I0129 16:30:56.239536 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:56 crc kubenswrapper[4813]: I0129 16:30:56.239718 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:56 crc kubenswrapper[4813]: E0129 16:30:56.240217 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:56 crc kubenswrapper[4813]: E0129 16:30:56.240369 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:56 crc kubenswrapper[4813]: E0129 16:30:56.240449 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:57 crc kubenswrapper[4813]: I0129 16:30:57.239199 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:57 crc kubenswrapper[4813]: E0129 16:30:57.239615 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:30:57 crc kubenswrapper[4813]: I0129 16:30:57.240917 4813 scope.go:117] "RemoveContainer" containerID="171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609" Jan 29 16:30:57 crc kubenswrapper[4813]: E0129 16:30:57.241261 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" Jan 29 16:30:58 crc kubenswrapper[4813]: I0129 16:30:58.239407 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:30:58 crc kubenswrapper[4813]: I0129 16:30:58.239464 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:30:58 crc kubenswrapper[4813]: I0129 16:30:58.239418 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:30:58 crc kubenswrapper[4813]: E0129 16:30:58.240482 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:30:58 crc kubenswrapper[4813]: E0129 16:30:58.240548 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:30:58 crc kubenswrapper[4813]: E0129 16:30:58.240630 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:30:59 crc kubenswrapper[4813]: I0129 16:30:59.239440 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:30:59 crc kubenswrapper[4813]: E0129 16:30:59.240333 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:00 crc kubenswrapper[4813]: I0129 16:31:00.239679 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:00 crc kubenswrapper[4813]: I0129 16:31:00.239724 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:00 crc kubenswrapper[4813]: E0129 16:31:00.239850 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:00 crc kubenswrapper[4813]: I0129 16:31:00.239885 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:00 crc kubenswrapper[4813]: E0129 16:31:00.239992 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:00 crc kubenswrapper[4813]: E0129 16:31:00.240285 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:01 crc kubenswrapper[4813]: I0129 16:31:01.239648 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:01 crc kubenswrapper[4813]: E0129 16:31:01.239814 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:02 crc kubenswrapper[4813]: I0129 16:31:02.238972 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:02 crc kubenswrapper[4813]: I0129 16:31:02.239054 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:02 crc kubenswrapper[4813]: E0129 16:31:02.239124 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:02 crc kubenswrapper[4813]: I0129 16:31:02.239139 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:02 crc kubenswrapper[4813]: E0129 16:31:02.239216 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:02 crc kubenswrapper[4813]: E0129 16:31:02.239298 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:03 crc kubenswrapper[4813]: I0129 16:31:03.239138 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:03 crc kubenswrapper[4813]: E0129 16:31:03.239521 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:03 crc kubenswrapper[4813]: I0129 16:31:03.830949 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/1.log" Jan 29 16:31:03 crc kubenswrapper[4813]: I0129 16:31:03.831512 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/0.log" Jan 29 16:31:03 crc kubenswrapper[4813]: I0129 16:31:03.831590 4813 generic.go:334] "Generic (PLEG): container finished" podID="4acefc9f-f68a-4566-a0f5-656b961d4267" containerID="c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753" exitCode=1 Jan 29 16:31:03 crc kubenswrapper[4813]: I0129 16:31:03.831640 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7cjx7" event={"ID":"4acefc9f-f68a-4566-a0f5-656b961d4267","Type":"ContainerDied","Data":"c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753"} Jan 29 16:31:03 crc kubenswrapper[4813]: I0129 16:31:03.831688 4813 scope.go:117] "RemoveContainer" containerID="f684b21bbaf1e0aa2be321650067acb57f13d7eac6cb317e4666832582e9f324" Jan 29 16:31:03 crc kubenswrapper[4813]: I0129 16:31:03.832205 4813 scope.go:117] "RemoveContainer" containerID="c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753" Jan 29 16:31:03 crc kubenswrapper[4813]: E0129 16:31:03.832426 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-7cjx7_openshift-multus(4acefc9f-f68a-4566-a0f5-656b961d4267)\"" pod="openshift-multus/multus-7cjx7" podUID="4acefc9f-f68a-4566-a0f5-656b961d4267" Jan 29 16:31:04 crc kubenswrapper[4813]: I0129 16:31:04.239565 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:04 crc kubenswrapper[4813]: I0129 16:31:04.239675 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:04 crc kubenswrapper[4813]: E0129 16:31:04.239749 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:04 crc kubenswrapper[4813]: E0129 16:31:04.239920 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:04 crc kubenswrapper[4813]: I0129 16:31:04.240798 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:04 crc kubenswrapper[4813]: E0129 16:31:04.241005 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:04 crc kubenswrapper[4813]: I0129 16:31:04.839148 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/1.log" Jan 29 16:31:05 crc kubenswrapper[4813]: I0129 16:31:05.239249 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:05 crc kubenswrapper[4813]: E0129 16:31:05.239547 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:06 crc kubenswrapper[4813]: I0129 16:31:06.239332 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:06 crc kubenswrapper[4813]: I0129 16:31:06.239352 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:06 crc kubenswrapper[4813]: E0129 16:31:06.239450 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:06 crc kubenswrapper[4813]: E0129 16:31:06.239526 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:06 crc kubenswrapper[4813]: I0129 16:31:06.239894 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:06 crc kubenswrapper[4813]: E0129 16:31:06.240747 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:07 crc kubenswrapper[4813]: I0129 16:31:07.239296 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:07 crc kubenswrapper[4813]: E0129 16:31:07.239516 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:08 crc kubenswrapper[4813]: E0129 16:31:08.193615 4813 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 16:31:08 crc kubenswrapper[4813]: I0129 16:31:08.240419 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:08 crc kubenswrapper[4813]: E0129 16:31:08.240553 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:08 crc kubenswrapper[4813]: I0129 16:31:08.241536 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:08 crc kubenswrapper[4813]: E0129 16:31:08.242098 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:08 crc kubenswrapper[4813]: I0129 16:31:08.242323 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:08 crc kubenswrapper[4813]: E0129 16:31:08.242682 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:08 crc kubenswrapper[4813]: E0129 16:31:08.357073 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:31:09 crc kubenswrapper[4813]: I0129 16:31:09.239309 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:09 crc kubenswrapper[4813]: E0129 16:31:09.239406 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:09 crc kubenswrapper[4813]: I0129 16:31:09.239610 4813 scope.go:117] "RemoveContainer" containerID="171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609" Jan 29 16:31:09 crc kubenswrapper[4813]: E0129 16:31:09.239738 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-cj2h6_openshift-ovn-kubernetes(b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" Jan 29 16:31:10 crc kubenswrapper[4813]: I0129 16:31:10.239230 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:10 crc kubenswrapper[4813]: I0129 16:31:10.239278 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:10 crc kubenswrapper[4813]: I0129 16:31:10.239287 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:10 crc kubenswrapper[4813]: E0129 16:31:10.239379 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:10 crc kubenswrapper[4813]: E0129 16:31:10.239606 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:10 crc kubenswrapper[4813]: E0129 16:31:10.239702 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:11 crc kubenswrapper[4813]: I0129 16:31:11.239334 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:11 crc kubenswrapper[4813]: E0129 16:31:11.239458 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:12 crc kubenswrapper[4813]: I0129 16:31:12.239290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:12 crc kubenswrapper[4813]: I0129 16:31:12.239359 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:12 crc kubenswrapper[4813]: E0129 16:31:12.239442 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:12 crc kubenswrapper[4813]: I0129 16:31:12.239459 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:12 crc kubenswrapper[4813]: E0129 16:31:12.239536 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:12 crc kubenswrapper[4813]: E0129 16:31:12.239613 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:13 crc kubenswrapper[4813]: I0129 16:31:13.239254 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:13 crc kubenswrapper[4813]: E0129 16:31:13.239402 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:13 crc kubenswrapper[4813]: E0129 16:31:13.358735 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 29 16:31:14 crc kubenswrapper[4813]: I0129 16:31:14.239131 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:14 crc kubenswrapper[4813]: I0129 16:31:14.239132 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:14 crc kubenswrapper[4813]: E0129 16:31:14.239281 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:14 crc kubenswrapper[4813]: I0129 16:31:14.239324 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:14 crc kubenswrapper[4813]: E0129 16:31:14.239421 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:14 crc kubenswrapper[4813]: E0129 16:31:14.239547 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:15 crc kubenswrapper[4813]: I0129 16:31:15.239353 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:15 crc kubenswrapper[4813]: E0129 16:31:15.239498 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:16 crc kubenswrapper[4813]: I0129 16:31:16.239290 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:16 crc kubenswrapper[4813]: I0129 16:31:16.239377 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:16 crc kubenswrapper[4813]: E0129 16:31:16.239417 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:16 crc kubenswrapper[4813]: E0129 16:31:16.239502 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:16 crc kubenswrapper[4813]: I0129 16:31:16.239650 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:16 crc kubenswrapper[4813]: E0129 16:31:16.239700 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:17 crc kubenswrapper[4813]: I0129 16:31:17.239281 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:17 crc kubenswrapper[4813]: E0129 16:31:17.239415 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:18 crc kubenswrapper[4813]: I0129 16:31:18.239548 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:18 crc kubenswrapper[4813]: I0129 16:31:18.239599 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:18 crc kubenswrapper[4813]: E0129 16:31:18.242683 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:18 crc kubenswrapper[4813]: I0129 16:31:18.242717 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:18 crc kubenswrapper[4813]: E0129 16:31:18.242859 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:18 crc kubenswrapper[4813]: E0129 16:31:18.242934 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:18 crc kubenswrapper[4813]: E0129 16:31:18.359464 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:31:19 crc kubenswrapper[4813]: I0129 16:31:19.239555 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:19 crc kubenswrapper[4813]: E0129 16:31:19.239819 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:19 crc kubenswrapper[4813]: I0129 16:31:19.239961 4813 scope.go:117] "RemoveContainer" containerID="c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753" Jan 29 16:31:19 crc kubenswrapper[4813]: I0129 16:31:19.887987 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/1.log" Jan 29 16:31:19 crc kubenswrapper[4813]: I0129 16:31:19.888312 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7cjx7" event={"ID":"4acefc9f-f68a-4566-a0f5-656b961d4267","Type":"ContainerStarted","Data":"8ee8d82a364f06cced80be93c75ae35206187559f390e9f2369d41c941699140"} Jan 29 16:31:20 crc kubenswrapper[4813]: I0129 16:31:20.238724 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:20 crc kubenswrapper[4813]: E0129 16:31:20.239155 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:20 crc kubenswrapper[4813]: I0129 16:31:20.238905 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:20 crc kubenswrapper[4813]: I0129 16:31:20.238781 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:20 crc kubenswrapper[4813]: E0129 16:31:20.239796 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:20 crc kubenswrapper[4813]: E0129 16:31:20.239594 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:21 crc kubenswrapper[4813]: I0129 16:31:21.239216 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:21 crc kubenswrapper[4813]: E0129 16:31:21.239349 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:22 crc kubenswrapper[4813]: I0129 16:31:22.239137 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:22 crc kubenswrapper[4813]: I0129 16:31:22.239176 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:22 crc kubenswrapper[4813]: I0129 16:31:22.239188 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:22 crc kubenswrapper[4813]: E0129 16:31:22.239338 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:22 crc kubenswrapper[4813]: E0129 16:31:22.239444 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:22 crc kubenswrapper[4813]: E0129 16:31:22.239620 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:23 crc kubenswrapper[4813]: I0129 16:31:23.238944 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:23 crc kubenswrapper[4813]: E0129 16:31:23.239100 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:23 crc kubenswrapper[4813]: E0129 16:31:23.360532 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:31:24 crc kubenswrapper[4813]: I0129 16:31:24.239266 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:24 crc kubenswrapper[4813]: I0129 16:31:24.239292 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:24 crc kubenswrapper[4813]: E0129 16:31:24.239387 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:24 crc kubenswrapper[4813]: I0129 16:31:24.239401 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:24 crc kubenswrapper[4813]: E0129 16:31:24.239715 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:24 crc kubenswrapper[4813]: E0129 16:31:24.239785 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:24 crc kubenswrapper[4813]: I0129 16:31:24.240076 4813 scope.go:117] "RemoveContainer" containerID="171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609" Jan 29 16:31:24 crc kubenswrapper[4813]: I0129 16:31:24.904101 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/3.log" Jan 29 16:31:24 crc kubenswrapper[4813]: I0129 16:31:24.907052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerStarted","Data":"4b1ad71bda3ef6253797677910c06ead6fe138502b74dd7e52b23bd3f946c8eb"} Jan 29 16:31:24 crc kubenswrapper[4813]: I0129 16:31:24.907505 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:31:25 crc kubenswrapper[4813]: I0129 16:31:25.238584 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:25 crc kubenswrapper[4813]: E0129 16:31:25.238708 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:25 crc kubenswrapper[4813]: I0129 16:31:25.368327 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nsttk"] Jan 29 16:31:25 crc kubenswrapper[4813]: I0129 16:31:25.911690 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:25 crc kubenswrapper[4813]: E0129 16:31:25.912053 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:26 crc kubenswrapper[4813]: I0129 16:31:26.239274 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:26 crc kubenswrapper[4813]: E0129 16:31:26.239688 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:26 crc kubenswrapper[4813]: I0129 16:31:26.239404 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:26 crc kubenswrapper[4813]: I0129 16:31:26.239512 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:26 crc kubenswrapper[4813]: E0129 16:31:26.239944 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:26 crc kubenswrapper[4813]: E0129 16:31:26.240103 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:28 crc kubenswrapper[4813]: I0129 16:31:28.238858 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:28 crc kubenswrapper[4813]: I0129 16:31:28.238932 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:28 crc kubenswrapper[4813]: I0129 16:31:28.238967 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:28 crc kubenswrapper[4813]: E0129 16:31:28.240184 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:28 crc kubenswrapper[4813]: I0129 16:31:28.240221 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:28 crc kubenswrapper[4813]: E0129 16:31:28.240373 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:28 crc kubenswrapper[4813]: E0129 16:31:28.240399 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:28 crc kubenswrapper[4813]: E0129 16:31:28.240483 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:28 crc kubenswrapper[4813]: E0129 16:31:28.361043 4813 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:31:30 crc kubenswrapper[4813]: I0129 16:31:30.238743 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:30 crc kubenswrapper[4813]: I0129 16:31:30.238830 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:30 crc kubenswrapper[4813]: I0129 16:31:30.238762 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:30 crc kubenswrapper[4813]: E0129 16:31:30.239034 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:30 crc kubenswrapper[4813]: E0129 16:31:30.239235 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:30 crc kubenswrapper[4813]: E0129 16:31:30.239420 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:30 crc kubenswrapper[4813]: I0129 16:31:30.239612 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:30 crc kubenswrapper[4813]: E0129 16:31:30.239798 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:30 crc kubenswrapper[4813]: I0129 16:31:30.239996 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:31:30 crc kubenswrapper[4813]: I0129 16:31:30.240059 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:31:32 crc kubenswrapper[4813]: I0129 16:31:32.240443 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:32 crc kubenswrapper[4813]: I0129 16:31:32.240549 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:32 crc kubenswrapper[4813]: E0129 16:31:32.240625 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nsttk" podUID="e35b844b-1645-458c-b117-f60fe6042abe" Jan 29 16:31:32 crc kubenswrapper[4813]: I0129 16:31:32.240690 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:32 crc kubenswrapper[4813]: I0129 16:31:32.240751 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:32 crc kubenswrapper[4813]: E0129 16:31:32.240888 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 16:31:32 crc kubenswrapper[4813]: E0129 16:31:32.241067 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 16:31:32 crc kubenswrapper[4813]: E0129 16:31:32.241263 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.239306 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.239353 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.239353 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.239607 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.244642 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.244690 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.244831 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.244831 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.244854 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 16:31:34 crc kubenswrapper[4813]: I0129 16:31:34.249895 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.117277 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:36 crc kubenswrapper[4813]: E0129 16:31:36.117539 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:33:38.11751031 +0000 UTC m=+270.604713526 (durationBeforeRetry 2m2s). 
Jan 29 16:31:36 crc kubenswrapper[4813]: E0129 16:31:36.117539 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:33:38.11751031 +0000 UTC m=+270.604713526 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.219268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.219323 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.219348 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.219370 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.221144 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.225452 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.225562 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
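
The nestedpendingoperations.go:348 entry shows how the kubelet handles the failed CSI TearDown (the hostpath-provisioner driver had not re-registered yet): the operation is parked and retried with exponential backoff, and the "durationBeforeRetry 2m2s" is the cap the delay has stopped doubling at. A small sketch of that doubling-with-cap behavior; the 500ms starting point is an assumption, only the 2m2s cap is visible in the log:

package main

import (
	"fmt"
	"time"
)

// Illustrative constants: the cap matches the 2m2s in the log entry,
// the initial delay is assumed.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

// backoff doubles the wait after every consecutive failure, up to maxDelay.
type backoff struct{ delay time.Duration }

func (b *backoff) next() time.Duration {
	if b.delay == 0 {
		b.delay = initialDelay
		return b.delay
	}
	b.delay *= 2
	if b.delay > maxDelay {
		b.delay = maxDelay
	}
	return b.delay
}

func main() {
	var b backoff
	for i := 1; i <= 9; i++ {
		fmt.Printf("failure %d: retry in %v\n", i, b.next())
	}
	// failure 9 prints 2m2s: 128s exceeds the cap, matching the log.
}
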
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.366499 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.373200 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.381181 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:36 crc kubenswrapper[4813]: W0129 16:31:36.628457 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-b91dd847ceefb470539d0372f61f6bc3c99c008ace3f456053159cd75ac89cd0 WatchSource:0}: Error finding container b91dd847ceefb470539d0372f61f6bc3c99c008ace3f456053159cd75ac89cd0: Status 404 returned error can't find the container with id b91dd847ceefb470539d0372f61f6bc3c99c008ace3f456053159cd75ac89cd0 Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.677513 4813 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.722509 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xsv7r"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.722787 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gtwsk"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.722957 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.723208 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.724165 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.724740 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.725412 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2s8mj"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.726093 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.726156 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gzgqp"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.727212 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.738204 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.738987 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4zm4s"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.739355 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.739775 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.739891 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.739992 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.741265 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.741939 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.750328 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.750443 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.750989 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.751806 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.752456 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.752550 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.753089 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.756784 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.757221 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.757340 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.757432 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.757535 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.757655 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.757750 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.758554 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.758820 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.760136 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.760381 4813 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.766160 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.766759 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.767359 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.767688 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.768084 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.769541 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.769619 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.770690 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.770750 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xr6mc"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.772172 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.800450 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.802475 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803099 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803406 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803549 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803643 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803718 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803783 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803790 4813 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-jpjx8"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803856 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.803944 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.804061 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.804157 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.804181 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.804272 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.804332 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.804512 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.804833 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xr6mc" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.804886 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.805502 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.805765 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.805850 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-px5lt"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.805765 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.806658 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.807520 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.808073 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.809256 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.809370 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.809464 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.809644 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.810553 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.810608 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.810773 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.810871 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.811283 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.811758 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.811892 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.812103 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.812625 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.812894 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.815451 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.821820 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ktp2w"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.822733 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.823590 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826501 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-etcd-client\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxgf\" (UniqueName: \"kubernetes.io/projected/d0eb230b-052d-4248-a827-a4b9a58281e3-kube-api-access-fxxgf\") pod \"cluster-samples-operator-665b6dd947-blzd8\" (UID: \"d0eb230b-052d-4248-a827-a4b9a58281e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826619 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826641 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5b89\" (UniqueName: \"kubernetes.io/projected/2e66c18c-dc70-48a8-bb7d-136c2c567966-kube-api-access-t5b89\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826657 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc86c\" (UniqueName: \"kubernetes.io/projected/5227068b-89fa-4779-b1e5-4f3fab7814e9-kube-api-access-jc86c\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-auth-proxy-config\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826686 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826703 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e66c18c-dc70-48a8-bb7d-136c2c567966-serving-cert\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826719 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-audit-policies\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826736 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-audit-dir\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826752 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jrt5\" (UniqueName: \"kubernetes.io/projected/d493bfd2-5953-4b97-a9e2-fb57cec31521-kube-api-access-9jrt5\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5227068b-89fa-4779-b1e5-4f3fab7814e9-config\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826784 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826797 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-service-ca-bundle\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826811 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b2466-e05a-45dc-af3a-2f36a1409b95-serving-cert\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826825 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-machine-approver-tls\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826861 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826876 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826891 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-config\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826907 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqt8\" (UniqueName: \"kubernetes.io/projected/9c267b88-a775-4677-839f-a30c62f8be31-kube-api-access-whqt8\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-encryption-config\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826938 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wszq6\" (UniqueName: \"kubernetes.io/projected/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-kube-api-access-wszq6\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.826955 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-config\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827125 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d493bfd2-5953-4b97-a9e2-fb57cec31521-config\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827152 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-audit-policies\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827173 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c267b88-a775-4677-839f-a30c62f8be31-audit-dir\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827194 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-serving-cert\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827230 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7d52c6b-8245-4244-af56-394714287a4f-trusted-ca\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827315 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-client-ca\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827378 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5227068b-89fa-4779-b1e5-4f3fab7814e9-images\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827400 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-config\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827426 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827499 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827548 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcbv\" (UniqueName: \"kubernetes.io/projected/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-kube-api-access-jqcbv\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827572 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827592 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7d52c6b-8245-4244-af56-394714287a4f-config\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827614 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5227068b-89fa-4779-b1e5-4f3fab7814e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827640 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0eb230b-052d-4248-a827-a4b9a58281e3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-blzd8\" (UID: \"d0eb230b-052d-4248-a827-a4b9a58281e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827819 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb5xg\" (UniqueName: \"kubernetes.io/projected/284b2466-e05a-45dc-af3a-2f36a1409b95-kube-api-access-sb5xg\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827845 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d52c6b-8245-4244-af56-394714287a4f-serving-cert\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827866 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827907 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.827921 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.828035 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d493bfd2-5953-4b97-a9e2-fb57cec31521-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.828065 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.828186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: 
\"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.828267 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzp46\" (UniqueName: \"kubernetes.io/projected/e7d52c6b-8245-4244-af56-394714287a4f-kube-api-access-lzp46\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.828299 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.829483 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.836781 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.837071 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.839904 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-f9sf8"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.840482 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jw9lk"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.840919 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.841699 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.843379 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.846275 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.861930 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.866260 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cmdkm"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.869253 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.879431 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.879444 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.879608 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.879827 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.879832 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.879966 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.879974 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.880248 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.880525 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.880790 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.881165 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.881323 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.881496 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.882766 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.883669 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.884184 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.884678 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.884896 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.885021 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.885319 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.885353 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.885485 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.885614 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.885629 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.885795 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.886155 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.886355 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.886155 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.886555 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.886645 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.886735 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.886837 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.886934 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.887018 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.887103 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.887576 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.888038 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.888424 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.889105 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.889900 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.889976 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.890249 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.890350 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.890517 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.890365 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.891132 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.891346 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.892058 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.895183 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.895754 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.899204 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.902785 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.924880 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.924990 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.926865 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.927245 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.927644 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928048 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928128 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928157 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928190 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928260 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928445 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928162 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5b89\" (UniqueName: \"kubernetes.io/projected/2e66c18c-dc70-48a8-bb7d-136c2c567966-kube-api-access-t5b89\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc86c\" (UniqueName: \"kubernetes.io/projected/5227068b-89fa-4779-b1e5-4f3fab7814e9-kube-api-access-jc86c\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928789 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-audit-policies\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-auth-proxy-config\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e66c18c-dc70-48a8-bb7d-136c2c567966-serving-cert\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928872 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jrt5\" (UniqueName: \"kubernetes.io/projected/d493bfd2-5953-4b97-a9e2-fb57cec31521-kube-api-access-9jrt5\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928902 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5227068b-89fa-4779-b1e5-4f3fab7814e9-config\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 
16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928922 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-audit-dir\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928941 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-service-ca-bundle\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928958 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928974 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b2466-e05a-45dc-af3a-2f36a1409b95-serving-cert\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.928991 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-machine-approver-tls\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929031 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929048 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-config\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-whqt8\" (UniqueName: \"kubernetes.io/projected/9c267b88-a775-4677-839f-a30c62f8be31-kube-api-access-whqt8\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929077 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-config\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929093 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-encryption-config\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929123 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wszq6\" (UniqueName: \"kubernetes.io/projected/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-kube-api-access-wszq6\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929144 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/73b52c55-232d-40df-bd6c-2cee7681deeb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: \"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929163 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d493bfd2-5953-4b97-a9e2-fb57cec31521-config\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929165 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929179 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-audit-policies\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c267b88-a775-4677-839f-a30c62f8be31-audit-dir\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929211 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-serving-cert\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7d52c6b-8245-4244-af56-394714287a4f-trusted-ca\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-client-ca\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929281 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5227068b-89fa-4779-b1e5-4f3fab7814e9-images\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-config\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929385 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73b52c55-232d-40df-bd6c-2cee7681deeb-serving-cert\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: \"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxjrx\" (UniqueName: \"kubernetes.io/projected/73b52c55-232d-40df-bd6c-2cee7681deeb-kube-api-access-cxjrx\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: 
\"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929439 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqcbv\" (UniqueName: \"kubernetes.io/projected/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-kube-api-access-jqcbv\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929461 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7d52c6b-8245-4244-af56-394714287a4f-config\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5227068b-89fa-4779-b1e5-4f3fab7814e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0eb230b-052d-4248-a827-a4b9a58281e3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-blzd8\" (UID: \"d0eb230b-052d-4248-a827-a4b9a58281e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d52c6b-8245-4244-af56-394714287a4f-serving-cert\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929577 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb5xg\" (UniqueName: \"kubernetes.io/projected/284b2466-e05a-45dc-af3a-2f36a1409b95-kube-api-access-sb5xg\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929611 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d493bfd2-5953-4b97-a9e2-fb57cec31521-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929644 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929654 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-sn7j7"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzp46\" (UniqueName: \"kubernetes.io/projected/e7d52c6b-8245-4244-af56-394714287a4f-kube-api-access-lzp46\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929679 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-etcd-client\") pod 
\"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxxgf\" (UniqueName: \"kubernetes.io/projected/d0eb230b-052d-4248-a827-a4b9a58281e3-kube-api-access-fxxgf\") pod \"cluster-samples-operator-665b6dd947-blzd8\" (UID: \"d0eb230b-052d-4248-a827-a4b9a58281e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.929763 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.930067 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lvcqb"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.930508 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.930955 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.931058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-audit-policies\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.931342 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.931428 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-auth-proxy-config\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.931646 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzhnr"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.931973 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932197 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932245 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932278 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932380 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932541 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932629 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-config\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932642 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932660 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.932921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5227068b-89fa-4779-b1e5-4f3fab7814e9-images\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.936923 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-config\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.939404 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e66c18c-dc70-48a8-bb7d-136c2c567966-serving-cert\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.939870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-config\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.941236 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c267b88-a775-4677-839f-a30c62f8be31-audit-dir\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.941548 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5227068b-89fa-4779-b1e5-4f3fab7814e9-config\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.941617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-audit-dir\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.942162 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.949498 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d493bfd2-5953-4b97-a9e2-fb57cec31521-config\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.949818 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-service-ca-bundle\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.949960 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-encryption-config\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.950774 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b2466-e05a-45dc-af3a-2f36a1409b95-serving-cert\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.967541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7d52c6b-8245-4244-af56-394714287a4f-trusted-ca\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc 
kubenswrapper[4813]: I0129 16:31:36.961934 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.970826 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-client-ca\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.973655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.974182 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.974608 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.974827 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-serving-cert\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.975930 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.978312 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.979095 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e66c18c-dc70-48a8-bb7d-136c2c567966-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.980402 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-audit-policies\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.981490 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5227068b-89fa-4779-b1e5-4f3fab7814e9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.964229 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gtwsk"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.981751 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xsv7r"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.981764 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.983532 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2s8mj"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.983681 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.984318 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-machine-approver-tls\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.984791 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.984952 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0eb230b-052d-4248-a827-a4b9a58281e3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-blzd8\" (UID: \"d0eb230b-052d-4248-a827-a4b9a58281e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.984989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.985215 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-etcd-client\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.985802 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7d52c6b-8245-4244-af56-394714287a4f-config\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.989692 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.990551 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.990786 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xr6mc"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.990827 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jpjx8"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.995032 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.998257 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.998396 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.998449 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8"] Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.998465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b67ca74acbaf3ce092433c1d89fda3f47bbb92bea4aea9156affb92ea9245ebc"} Jan 29 16:31:36 crc kubenswrapper[4813]: 
I0129 16:31:36.998492 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b91dd847ceefb470539d0372f61f6bc3c99c008ace3f456053159cd75ac89cd0"} Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.998573 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:36 crc kubenswrapper[4813]: I0129 16:31:36.999338 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.000589 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d493bfd2-5953-4b97-a9e2-fb57cec31521-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.000703 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.002470 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.004176 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gq7jc"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.006795 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.006846 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d52c6b-8245-4244-af56-394714287a4f-serving-cert\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.006962 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4zm4s"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.006997 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jw9lk"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.007493 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.008425 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cmdkm"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.008883 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b99c389cf3e30cf50157b04599e63a7d1261ab188a5599fe7b7cdb8a003df09c"} Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.008938 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.008958 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dd62f6e55906cce7237d4c176fc6a1e6d53338a5715cdce98c381b585edffe52"} Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.009229 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.009911 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.010859 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"69507d7e515d5e802311e6c631bffbe82de1e6b83383fd924d4e7a12f1029dab"} Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.010950 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c7068e4f24c14aabaf7e1a99b96433145f2450c645aa2457b74a76c41f027e9e"} Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.011372 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.012472 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.012522 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.013939 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.016158 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.016204 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-f9sf8"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.016875 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ktp2w"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.019712 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lvcqb"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.021872 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gzgqp"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.022686 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.030182 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.030595 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/73b52c55-232d-40df-bd6c-2cee7681deeb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: \"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.030660 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73b52c55-232d-40df-bd6c-2cee7681deeb-serving-cert\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: \"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.030679 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxjrx\" (UniqueName: \"kubernetes.io/projected/73b52c55-232d-40df-bd6c-2cee7681deeb-kube-api-access-cxjrx\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: \"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.031281 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/73b52c55-232d-40df-bd6c-2cee7681deeb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: \"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.032043 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 
16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.033182 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.033941 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.034556 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73b52c55-232d-40df-bd6c-2cee7681deeb-serving-cert\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: \"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.034719 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.035910 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gq7jc"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.037212 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.038335 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.041426 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.041623 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8mpzd"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.042364 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8mpzd" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.042608 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.043667 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.044784 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xxmmc"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.045398 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.046416 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.047031 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzhnr"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.047837 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.051091 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.052586 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-px5lt"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.055384 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.056525 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8mpzd"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.058080 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-glpkt"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.059132 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.059684 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-glpkt"] Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.066915 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.086547 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.106187 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.125982 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.145751 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.165953 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.189076 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.205822 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.225609 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.252979 4813 reflector.go:368] 
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.266360 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.285752 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.305955 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.325195 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.367152 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.392276 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.405921 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.426235 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.445997 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.466084 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.485806 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.505663 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.526176 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.546000 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.566058 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.586636 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.614729 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.627657 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.646383 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.667573 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.686782 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.708197 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.747439 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.767183 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.787060 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.807051 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.826042 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.846219 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.867275 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.886405 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.908482 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.925595 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.944913 4813 request.go:700] Waited for 1.015509591s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.971512 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 16:31:37 crc kubenswrapper[4813]: I0129 16:31:37.971809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5b89\" (UniqueName: \"kubernetes.io/projected/2e66c18c-dc70-48a8-bb7d-136c2c567966-kube-api-access-t5b89\") pod \"authentication-operator-69f744f599-gtwsk\" (UID: \"2e66c18c-dc70-48a8-bb7d-136c2c567966\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.001453 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc86c\" (UniqueName: \"kubernetes.io/projected/5227068b-89fa-4779-b1e5-4f3fab7814e9-kube-api-access-jc86c\") pod \"machine-api-operator-5694c8668f-gzgqp\" (UID: \"5227068b-89fa-4779-b1e5-4f3fab7814e9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.005818 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.027165 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.046533 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.046546 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.066419 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.086704 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.107269 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.131304 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.133139 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp"
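The request.go entry above ("Waited for 1.015509591s due to client-side throttling, not priority and fairness") comes from the API client's token-bucket rate limiter: once the configured burst is spent, each request must wait for the next token. A minimal sketch of that mechanism, assuming illustrative QPS/burst values of 5/10 (the node's real client settings are not visible in this log):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: refill rate (QPS) and burst size; 5/10 are assumptions.
	lim := rate.NewLimiter(rate.Limit(5), 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		_ = lim.Wait(context.Background()) // blocks once the burst is exhausted
		if wait := time.Since(start); wait > 50*time.Millisecond {
			// client-go logs a similar line (request.go) when a request had to wait
			fmt.Printf("Waited for %s due to client-side throttling\n", wait)
		}
	}
}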
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.151666 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.171644 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.187061 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.208691 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.226960 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.266164 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.272750 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqt8\" (UniqueName: \"kubernetes.io/projected/9c267b88-a775-4677-839f-a30c62f8be31-kube-api-access-whqt8\") pod \"oauth-openshift-558db77b4-2s8mj\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.290154 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.303234 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gtwsk"]
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.305805 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.324911 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.346354 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.361544 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gzgqp"]
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.369169 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 16:31:38 crc kubenswrapper[4813]: W0129 16:31:38.375940 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5227068b_89fa_4779_b1e5_4f3fab7814e9.slice/crio-c1dea964cc376ed64cf162010c83dd8767015f089ddec0774cbb90cad9b581d3 WatchSource:0}: Error finding container c1dea964cc376ed64cf162010c83dd8767015f089ddec0774cbb90cad9b581d3: Status 404 returned error can't find the container with id c1dea964cc376ed64cf162010c83dd8767015f089ddec0774cbb90cad9b581d3
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.380486 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.393960 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.406869 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.427062 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.451717 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.467189 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.506514 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jrt5\" (UniqueName: \"kubernetes.io/projected/d493bfd2-5953-4b97-a9e2-fb57cec31521-kube-api-access-9jrt5\") pod \"openshift-apiserver-operator-796bbdcf4f-twpsz\" (UID: \"d493bfd2-5953-4b97-a9e2-fb57cec31521\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.523958 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb5xg\" (UniqueName: \"kubernetes.io/projected/284b2466-e05a-45dc-af3a-2f36a1409b95-kube-api-access-sb5xg\") pod \"controller-manager-879f6c89f-xsv7r\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.543573 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzp46\" (UniqueName: \"kubernetes.io/projected/e7d52c6b-8245-4244-af56-394714287a4f-kube-api-access-lzp46\") pod \"console-operator-58897d9998-4zm4s\" (UID: \"e7d52c6b-8245-4244-af56-394714287a4f\") " pod="openshift-console-operator/console-operator-58897d9998-4zm4s"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.567425 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wszq6\" (UniqueName: \"kubernetes.io/projected/0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7-kube-api-access-wszq6\") pod \"apiserver-7bbb656c7d-7vjz9\" (UID: \"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.582335 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqcbv\" (UniqueName: \"kubernetes.io/projected/b54f84ee-bc73-4837-b44a-a88a1aa81f6c-kube-api-access-jqcbv\") pod \"machine-approver-56656f9798-8zrgw\" (UID: \"b54f84ee-bc73-4837-b44a-a88a1aa81f6c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw"
Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.586211 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
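The manager.go warnings above reference cgroup paths such as kubepods-burstable-pod5227068b_89fa_4779_b1e5_4f3fab7814e9.slice. Systemd unit names cannot carry the dashes of a pod UID, so the UID is embedded with underscores; a small hypothetical helper (not kubelet code) showing the mapping back to the UID that appears in the surrounding entries:

package main

import (
	"fmt"
	"strings"
)

// podUIDFromSlice recovers a pod UID from a systemd cgroup slice name by
// stripping the ".slice" suffix, cutting everything up to the "pod" marker,
// and restoring the dashes that were stored as underscores.
func podUIDFromSlice(slice string) string {
	s := strings.TrimSuffix(slice, ".slice")
	if i := strings.Index(s, "pod"); i >= 0 {
		return strings.ReplaceAll(s[i+len("pod"):], "_", "-")
	}
	return ""
}

func main() {
	// Prints 5227068b-89fa-4779-b1e5-4f3fab7814e9, the UID logged above for
	// the machine-api-operator-5694c8668f-gzgqp pod.
	fmt.Println(podUIDFromSlice("kubepods-burstable-pod5227068b_89fa_4779_b1e5_4f3fab7814e9.slice"))
}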
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.593723 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2s8mj"] Jan 29 16:31:38 crc kubenswrapper[4813]: W0129 16:31:38.604318 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c267b88_a775_4677_839f_a30c62f8be31.slice/crio-6881aea29da7abd24447037c71cae5ddf840cf89eef75a1c84dad0eed07c6172 WatchSource:0}: Error finding container 6881aea29da7abd24447037c71cae5ddf840cf89eef75a1c84dad0eed07c6172: Status 404 returned error can't find the container with id 6881aea29da7abd24447037c71cae5ddf840cf89eef75a1c84dad0eed07c6172 Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.605514 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.605744 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.627622 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.630409 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.647515 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.666475 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.706323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxxgf\" (UniqueName: \"kubernetes.io/projected/d0eb230b-052d-4248-a827-a4b9a58281e3-kube-api-access-fxxgf\") pod \"cluster-samples-operator-665b6dd947-blzd8\" (UID: \"d0eb230b-052d-4248-a827-a4b9a58281e3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.706726 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.726755 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.747335 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.754056 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.780932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxjrx\" (UniqueName: \"kubernetes.io/projected/73b52c55-232d-40df-bd6c-2cee7681deeb-kube-api-access-cxjrx\") pod \"openshift-config-operator-7777fb866f-jwdfq\" (UID: \"73b52c55-232d-40df-bd6c-2cee7681deeb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.785932 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.785990 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz"] Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.788170 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" Jan 29 16:31:38 crc kubenswrapper[4813]: W0129 16:31:38.796134 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd493bfd2_5953_4b97_a9e2_fb57cec31521.slice/crio-58918bd319c3be8dbe5e7fdd18a7b9907d32dbca2b713df13eba9664295a43d5 WatchSource:0}: Error finding container 58918bd319c3be8dbe5e7fdd18a7b9907d32dbca2b713df13eba9664295a43d5: Status 404 returned error can't find the container with id 58918bd319c3be8dbe5e7fdd18a7b9907d32dbca2b713df13eba9664295a43d5 Jan 29 16:31:38 crc kubenswrapper[4813]: W0129 16:31:38.808236 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54f84ee_bc73_4837_b44a_a88a1aa81f6c.slice/crio-0fa22a6afc052be064bcce8f56c029b94b302703f32e78ac4afd1614d125f031 WatchSource:0}: Error finding container 0fa22a6afc052be064bcce8f56c029b94b302703f32e78ac4afd1614d125f031: Status 404 returned error can't find the container with id 0fa22a6afc052be064bcce8f56c029b94b302703f32e78ac4afd1614d125f031 Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.808712 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.821912 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.825796 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.826003 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xsv7r"] Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.830959 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.850959 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.859804 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:38 crc kubenswrapper[4813]: W0129 16:31:38.864386 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod284b2466_e05a_45dc_af3a_2f36a1409b95.slice/crio-11ec15a1c052b2a11602cc4f044b59b9106d0b70c311a5d4a73f407fb13d99a0 WatchSource:0}: Error finding container 11ec15a1c052b2a11602cc4f044b59b9106d0b70c311a5d4a73f407fb13d99a0: Status 404 returned error can't find the container with id 11ec15a1c052b2a11602cc4f044b59b9106d0b70c311a5d4a73f407fb13d99a0 Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.866833 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.885691 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.906831 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.927078 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.946962 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.965037 4813 request.go:700] Waited for 1.905448637s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 29 16:31:38 crc kubenswrapper[4813]: I0129 16:31:38.967017 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.034457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" event={"ID":"284b2466-e05a-45dc-af3a-2f36a1409b95","Type":"ContainerStarted","Data":"11ec15a1c052b2a11602cc4f044b59b9106d0b70c311a5d4a73f407fb13d99a0"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.037603 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" event={"ID":"9c267b88-a775-4677-839f-a30c62f8be31","Type":"ContainerStarted","Data":"e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.037659 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" event={"ID":"9c267b88-a775-4677-839f-a30c62f8be31","Type":"ContainerStarted","Data":"6881aea29da7abd24447037c71cae5ddf840cf89eef75a1c84dad0eed07c6172"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.038125 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.042696 4813 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2s8mj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 
10.217.0.8:6443: connect: connection refused" start-of-body= Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.042751 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" podUID="9c267b88-a775-4677-839f-a30c62f8be31" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.044352 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" event={"ID":"5227068b-89fa-4779-b1e5-4f3fab7814e9","Type":"ContainerStarted","Data":"9760bb339c0fecbb36c54affb97751d19f4553733a8e7756b06a006d615cb37b"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.044390 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" event={"ID":"5227068b-89fa-4779-b1e5-4f3fab7814e9","Type":"ContainerStarted","Data":"466d7cf0df1f1ffa3b2e9befee5c6358bd576012ec51be564c8f0e6f081b0a4f"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.044408 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" event={"ID":"5227068b-89fa-4779-b1e5-4f3fab7814e9","Type":"ContainerStarted","Data":"c1dea964cc376ed64cf162010c83dd8767015f089ddec0774cbb90cad9b581d3"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.047025 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" event={"ID":"2e66c18c-dc70-48a8-bb7d-136c2c567966","Type":"ContainerStarted","Data":"0e6dbdbf77c01e36aa28188aaffd4ea0daa12a1fb57b326fa7f30b69a4a4361f"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.047057 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" event={"ID":"2e66c18c-dc70-48a8-bb7d-136c2c567966","Type":"ContainerStarted","Data":"7623c77dbd5a57fc149457b9501fe854f305e7f82a8960ed8094c894a2ed2bd9"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.049818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" event={"ID":"d493bfd2-5953-4b97-a9e2-fb57cec31521","Type":"ContainerStarted","Data":"b5c5372452d7d5803439bd611c75ad5b318957250f64e7e9a9556e556ce41566"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.051498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" event={"ID":"d493bfd2-5953-4b97-a9e2-fb57cec31521","Type":"ContainerStarted","Data":"58918bd319c3be8dbe5e7fdd18a7b9907d32dbca2b713df13eba9664295a43d5"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.061214 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" event={"ID":"b54f84ee-bc73-4837-b44a-a88a1aa81f6c","Type":"ContainerStarted","Data":"0fa22a6afc052be064bcce8f56c029b94b302703f32e78ac4afd1614d125f031"} Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.062247 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-etcd-client\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") 
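The probe failure above records the prober's HTTP GET against the container IP before the server socket is listening. A hypothetical sketch of an equivalent readiness check: skipping certificate verification mirrors how the kubelet treats HTTPS probes (an assumption here), and any dial error, like the "connection refused" above, counts as failure.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one readiness check: a dial/transport error or a non-2xx/3xx
// status yields "failure", anything else "success".
func probe(url string) (string, error) {
	client := &http.Client{
		Timeout:   1 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "failure", err // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success", nil
	}
	return "failure", fmt.Errorf("status %d", resp.StatusCode)
}

func main() {
	result, err := probe("https://10.217.0.8:6443/healthz")
	fmt.Println(result, err)
}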
" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.062879 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r87kb\" (UniqueName: \"kubernetes.io/projected/90236b09-6475-4964-a2bf-ad6835024f83-kube-api-access-r87kb\") pod \"dns-operator-744455d44c-ktp2w\" (UID: \"90236b09-6475-4964-a2bf-ad6835024f83\") " pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.062947 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-service-ca\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063182 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-serving-cert\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063213 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbldc\" (UniqueName: \"kubernetes.io/projected/54dd6b37-e552-4102-a2cb-3936083eb6c9-kube-api-access-fbldc\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063243 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063282 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-tls\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063308 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-certificates\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063331 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-client\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc 
kubenswrapper[4813]: I0129 16:31:39.063363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmwm\" (UniqueName: \"kubernetes.io/projected/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-kube-api-access-wxmwm\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063426 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-bound-sa-token\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063475 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-oauth-serving-cert\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063502 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddpw\" (UniqueName: \"kubernetes.io/projected/420c9b8a-a626-4eb7-885e-7290574cfc30-kube-api-access-nddpw\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063547 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-service-ca\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063605 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90236b09-6475-4964-a2bf-ad6835024f83-metrics-tls\") pod \"dns-operator-744455d44c-ktp2w\" (UID: \"90236b09-6475-4964-a2bf-ad6835024f83\") " pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-config\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063667 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-client-ca\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgprj\" (UniqueName: \"kubernetes.io/projected/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-kube-api-access-sgprj\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063770 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39a766be-6117-42a5-9635-d01fe1fbb58e-images\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063798 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8ff5836-7af1-4527-9b8c-77d02e8e2986-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-config\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.063953 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7729409c-8459-492d-ac6c-f156327c6e2e-node-pullsecrets\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.064254 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b070473c-2a5e-4df6-8a05-4635a4c0262a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.064297 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv5cb\" (UniqueName: \"kubernetes.io/projected/ef94c4ec-2043-4d8f-ab0c-ac2458a44c82-kube-api-access-xv5cb\") pod \"downloads-7954f5f757-xr6mc\" (UID: \"ef94c4ec-2043-4d8f-ab0c-ac2458a44c82\") " pod="openshift-console/downloads-7954f5f757-xr6mc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.064370 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w62ln\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-kube-api-access-w62ln\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.064417 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-config\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.064471 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-image-import-ca\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.064547 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-etcd-serving-ca\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.064701 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-srv-cert\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068334 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhr4z\" (UniqueName: \"kubernetes.io/projected/84994fa0-3f61-4bca-b679-6bc0a4cb1558-kube-api-access-xhr4z\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068417 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw85q\" (UniqueName: \"kubernetes.io/projected/6cb1eff8-1177-4967-805e-bc2fc9cbea95-kube-api-access-tw85q\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068479 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068551 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/90fcf277-fd30-4c95-80b6-4c5199172c6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068575 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4llb\" (UniqueName: \"kubernetes.io/projected/39a766be-6117-42a5-9635-d01fe1fbb58e-kube-api-access-x4llb\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068609 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-config\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068640 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-serving-cert\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068703 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068775 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-trusted-ca\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068801 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqctl\" (UniqueName: \"kubernetes.io/projected/43d11378-44c7-4404-88ba-0f30941d6f46-kube-api-access-vqctl\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068822 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7729409c-8459-492d-ac6c-f156327c6e2e-audit-dir\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068843 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb1eff8-1177-4967-805e-bc2fc9cbea95-serving-cert\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90fcf277-fd30-4c95-80b6-4c5199172c6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.068907 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39a766be-6117-42a5-9635-d01fe1fbb58e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.069030 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39a766be-6117-42a5-9635-d01fe1fbb58e-proxy-tls\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.069096 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9nx8\" (UniqueName: \"kubernetes.io/projected/b070473c-2a5e-4df6-8a05-4635a4c0262a-kube-api-access-n9nx8\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.069746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43d11378-44c7-4404-88ba-0f30941d6f46-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.069789 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b070473c-2a5e-4df6-8a05-4635a4c0262a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.069873 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ff5836-7af1-4527-9b8c-77d02e8e2986-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.069902 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f7cd5f7-bcff-4bc3-a908-19e62cea720c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f299f\" (UID: \"7f7cd5f7-bcff-4bc3-a908-19e62cea720c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.069960 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070299 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54dd6b37-e552-4102-a2cb-3936083eb6c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070319 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54dd6b37-e552-4102-a2cb-3936083eb6c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070342 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-audit\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070364 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/420c9b8a-a626-4eb7-885e-7290574cfc30-serving-cert\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 
16:31:39.070406 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b070473c-2a5e-4df6-8a05-4635a4c0262a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-oauth-config\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070457 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lct88\" (UniqueName: \"kubernetes.io/projected/d8ff5836-7af1-4527-9b8c-77d02e8e2986-kube-api-access-lct88\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070482 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk5bz\" (UniqueName: \"kubernetes.io/projected/7729409c-8459-492d-ac6c-f156327c6e2e-kube-api-access-fk5bz\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070500 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-config\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070524 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-trusted-ca-bundle\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/54dd6b37-e552-4102-a2cb-3936083eb6c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070576 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-ca\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070594 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d11378-44c7-4404-88ba-0f30941d6f46-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070644 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npvzf\" (UniqueName: \"kubernetes.io/projected/7f7cd5f7-bcff-4bc3-a908-19e62cea720c-kube-api-access-npvzf\") pod \"control-plane-machine-set-operator-78cbb6b69f-f299f\" (UID: \"7f7cd5f7-bcff-4bc3-a908-19e62cea720c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.070683 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-encryption-config\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.075403 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:39.575349981 +0000 UTC m=+152.062553197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.089503 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8"] Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.133333 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4zm4s"] Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.154879 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9"] Jan 29 16:31:39 crc kubenswrapper[4813]: W0129 16:31:39.161834 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b4f35f5_f9dd_49f0_bbde_5c85a8363eb7.slice/crio-a9bd7938b0707e3877017d35457f7383aad98e45f824c2aafa8541486b8bb4d1 WatchSource:0}: Error finding container a9bd7938b0707e3877017d35457f7383aad98e45f824c2aafa8541486b8bb4d1: Status 404 returned error can't find the container with id a9bd7938b0707e3877017d35457f7383aad98e45f824c2aafa8541486b8bb4d1 Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.162509 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq"] Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172076 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.172250 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:39.672214272 +0000 UTC m=+152.159417488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-client-ca\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172453 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-config\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172518 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgprj\" (UniqueName: \"kubernetes.io/projected/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-kube-api-access-sgprj\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172544 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39a766be-6117-42a5-9635-d01fe1fbb58e-images\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172607 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8ff5836-7af1-4527-9b8c-77d02e8e2986-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172633 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-config\") pod \"etcd-operator-b45778765-jpjx8\" (UID: 
\"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172673 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fc7adf32-69e7-4f6a-be51-a11b653b1982-srv-cert\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172708 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7729409c-8459-492d-ac6c-f156327c6e2e-node-pullsecrets\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172729 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b070473c-2a5e-4df6-8a05-4635a4c0262a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172773 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv5cb\" (UniqueName: \"kubernetes.io/projected/ef94c4ec-2043-4d8f-ab0c-ac2458a44c82-kube-api-access-xv5cb\") pod \"downloads-7954f5f757-xr6mc\" (UID: \"ef94c4ec-2043-4d8f-ab0c-ac2458a44c82\") " pod="openshift-console/downloads-7954f5f757-xr6mc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.172796 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-stats-auth\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.173498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w62ln\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-kube-api-access-w62ln\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.173556 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-config\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.173581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-plugins-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.174019 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7729409c-8459-492d-ac6c-f156327c6e2e-node-pullsecrets\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.174584 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39a766be-6117-42a5-9635-d01fe1fbb58e-images\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.174813 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-image-import-ca\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.174903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-etcd-serving-ca\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.174937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-srv-cert\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175172 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhr4z\" (UniqueName: \"kubernetes.io/projected/84994fa0-3f61-4bca-b679-6bc0a4cb1558-kube-api-access-xhr4z\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175237 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw85q\" (UniqueName: \"kubernetes.io/projected/6cb1eff8-1177-4967-805e-bc2fc9cbea95-kube-api-access-tw85q\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175261 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175318 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90fcf277-fd30-4c95-80b6-4c5199172c6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 
16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175344 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4llb\" (UniqueName: \"kubernetes.io/projected/39a766be-6117-42a5-9635-d01fe1fbb58e-kube-api-access-x4llb\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175364 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-proxy-tls\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbd5z\" (UniqueName: \"kubernetes.io/projected/8bc8593c-b449-4611-b280-016930c5b2b8-kube-api-access-bbd5z\") pod \"ingress-canary-8mpzd\" (UID: \"8bc8593c-b449-4611-b280-016930c5b2b8\") " pod="openshift-ingress-canary/ingress-canary-8mpzd" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-config\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175489 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-config\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fc7adf32-69e7-4f6a-be51-a11b653b1982-profile-collector-cert\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bc8593c-b449-4611-b280-016930c5b2b8-cert\") pod \"ingress-canary-8mpzd\" (UID: \"8bc8593c-b449-4611-b280-016930c5b2b8\") " pod="openshift-ingress-canary/ingress-canary-8mpzd" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175570 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175592 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b37afb82-f85f-47d3-ad0c-5c7c60b74083-secret-volume\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-serving-cert\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175669 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-mountpoint-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175696 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-trusted-ca\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175735 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb1eff8-1177-4967-805e-bc2fc9cbea95-serving-cert\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqctl\" (UniqueName: \"kubernetes.io/projected/43d11378-44c7-4404-88ba-0f30941d6f46-kube-api-access-vqctl\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7729409c-8459-492d-ac6c-f156327c6e2e-audit-dir\") pod \"apiserver-76f77b778f-px5lt\" (UID: 
\"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175850 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34aee0ca-bbdb-4db5-b1be-8203841a4436-tmpfs\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-registration-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-image-import-ca\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.175976 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14576512-2790-478f-b22a-441dc5340af6-config-volume\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176403 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90fcf277-fd30-4c95-80b6-4c5199172c6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176437 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39a766be-6117-42a5-9635-d01fe1fbb58e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176498 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-serving-cert\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176590 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/af809a7f-01b1-428e-a351-048f0d1c0d33-node-bootstrap-token\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176617 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/af809a7f-01b1-428e-a351-048f0d1c0d33-certs\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176640 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkdh5\" (UniqueName: \"kubernetes.io/projected/a35ad584-e384-463f-b679-485f7c31cd20-kube-api-access-dkdh5\") pod \"multus-admission-controller-857f4d67dd-lvcqb\" (UID: \"a35ad584-e384-463f-b679-485f7c31cd20\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176776 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39a766be-6117-42a5-9635-d01fe1fbb58e-proxy-tls\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176828 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-metrics-certs\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176853 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffsvw\" (UniqueName: \"kubernetes.io/projected/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-kube-api-access-ffsvw\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.176882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9nx8\" (UniqueName: \"kubernetes.io/projected/b070473c-2a5e-4df6-8a05-4635a4c0262a-kube-api-access-n9nx8\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.177760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39a766be-6117-42a5-9635-d01fe1fbb58e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.178962 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-config\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.179007 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7729409c-8459-492d-ac6c-f156327c6e2e-audit-dir\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.179509 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90fcf277-fd30-4c95-80b6-4c5199172c6d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.180325 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-config\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.180913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-etcd-serving-ca\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.181935 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8ff5836-7af1-4527-9b8c-77d02e8e2986-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.181996 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b070473c-2a5e-4df6-8a05-4635a4c0262a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.182163 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-serving-cert\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.182089 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90fcf277-fd30-4c95-80b6-4c5199172c6d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.182555 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/647dd0b4-1274-4240-aeaa-2506e561ebba-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.182684 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43d11378-44c7-4404-88ba-0f30941d6f46-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.182707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b070473c-2a5e-4df6-8a05-4635a4c0262a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.183124 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-client-ca\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.183498 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-trusted-ca\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.183664 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.184454 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-config\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.185265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.185799 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43d11378-44c7-4404-88ba-0f30941d6f46-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: W0129 
16:31:39.188799 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b52c55_232d_40df_bd6c_2cee7681deeb.slice/crio-b13add175ed36282a44b74a72bf800cb7f05834e94d377ec4c7a061298a56712 WatchSource:0}: Error finding container b13add175ed36282a44b74a72bf800cb7f05834e94d377ec4c7a061298a56712: Status 404 returned error can't find the container with id b13add175ed36282a44b74a72bf800cb7f05834e94d377ec4c7a061298a56712 Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.189211 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.190990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39a766be-6117-42a5-9635-d01fe1fbb58e-proxy-tls\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.191372 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f7cd5f7-bcff-4bc3-a908-19e62cea720c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f299f\" (UID: \"7f7cd5f7-bcff-4bc3-a908-19e62cea720c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.191423 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6pf\" (UniqueName: \"kubernetes.io/projected/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-kube-api-access-th6pf\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.191471 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ff5836-7af1-4527-9b8c-77d02e8e2986-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.191508 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-default-certificate\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194276 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b37afb82-f85f-47d3-ad0c-5c7c60b74083-config-volume\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194299 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlflg\" (UniqueName: \"kubernetes.io/projected/af809a7f-01b1-428e-a351-048f0d1c0d33-kube-api-access-mlflg\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194524 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194580 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57nn\" (UniqueName: \"kubernetes.io/projected/fc7adf32-69e7-4f6a-be51-a11b653b1982-kube-api-access-v57nn\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194608 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882lk\" (UniqueName: \"kubernetes.io/projected/b032c755-2648-42f4-805f-275a96ab5ef5-kube-api-access-882lk\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194645 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194646 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8ff5836-7af1-4527-9b8c-77d02e8e2986-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194668 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-config\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/420c9b8a-a626-4eb7-885e-7290574cfc30-serving-cert\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-config\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194740 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54dd6b37-e552-4102-a2cb-3936083eb6c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194788 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54dd6b37-e552-4102-a2cb-3936083eb6c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-audit\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194845 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsh25\" (UniqueName: \"kubernetes.io/projected/af0ddea2-b781-4109-a681-c4d1dd3083d9-kube-api-access-tsh25\") pod \"package-server-manager-789f6589d5-7jhrs\" (UID: \"af0ddea2-b781-4109-a681-c4d1dd3083d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194881 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a35ad584-e384-463f-b679-485f7c31cd20-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lvcqb\" (UID: \"a35ad584-e384-463f-b679-485f7c31cd20\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194905 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b070473c-2a5e-4df6-8a05-4635a4c0262a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 
16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194927 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dvm5\" (UniqueName: \"kubernetes.io/projected/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-kube-api-access-8dvm5\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194954 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-oauth-config\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk5bz\" (UniqueName: \"kubernetes.io/projected/7729409c-8459-492d-ac6c-f156327c6e2e-kube-api-access-fk5bz\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.194995 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6595166b-564e-41bc-8d72-4306ee7da59d-service-ca-bundle\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.195010 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:39.69499474 +0000 UTC m=+152.182197956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195046 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjsgx\" (UniqueName: \"kubernetes.io/projected/6595166b-564e-41bc-8d72-4306ee7da59d-kube-api-access-tjsgx\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195086 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lct88\" (UniqueName: \"kubernetes.io/projected/d8ff5836-7af1-4527-9b8c-77d02e8e2986-kube-api-access-lct88\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195123 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-config\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195144 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-trusted-ca-bundle\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195165 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/54dd6b37-e552-4102-a2cb-3936083eb6c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195206 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-ca\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d11378-44c7-4404-88ba-0f30941d6f46-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195261 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195280 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2ct\" (UniqueName: \"kubernetes.io/projected/34aee0ca-bbdb-4db5-b1be-8203841a4436-kube-api-access-qs2ct\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195299 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npvzf\" (UniqueName: \"kubernetes.io/projected/7f7cd5f7-bcff-4bc3-a908-19e62cea720c-kube-api-access-npvzf\") pod \"control-plane-machine-set-operator-78cbb6b69f-f299f\" (UID: \"7f7cd5f7-bcff-4bc3-a908-19e62cea720c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195320 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-encryption-config\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195338 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-signing-cabundle\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195360 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34aee0ca-bbdb-4db5-b1be-8203841a4436-webhook-cert\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195382 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-etcd-client\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195400 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14576512-2790-478f-b22a-441dc5340af6-metrics-tls\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195421 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87kb\" (UniqueName: 
\"kubernetes.io/projected/90236b09-6475-4964-a2bf-ad6835024f83-kube-api-access-r87kb\") pod \"dns-operator-744455d44c-ktp2w\" (UID: \"90236b09-6475-4964-a2bf-ad6835024f83\") " pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195440 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-serving-cert\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbldc\" (UniqueName: \"kubernetes.io/projected/54dd6b37-e552-4102-a2cb-3936083eb6c9-kube-api-access-fbldc\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195481 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647dd0b4-1274-4240-aeaa-2506e561ebba-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195504 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7kj\" (UniqueName: \"kubernetes.io/projected/25d33142-291e-456d-9718-50a45d83db4a-kube-api-access-rc7kj\") pod \"migrator-59844c95c7-qpxtb\" (UID: \"25d33142-291e-456d-9718-50a45d83db4a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-service-ca\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.195560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp9dt\" (UniqueName: \"kubernetes.io/projected/14576512-2790-478f-b22a-441dc5340af6-kube-api-access-dp9dt\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.196206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7729409c-8459-492d-ac6c-f156327c6e2e-audit\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.196523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f7cd5f7-bcff-4bc3-a908-19e62cea720c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f299f\" (UID: \"7f7cd5f7-bcff-4bc3-a908-19e62cea720c\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.197616 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54dd6b37-e552-4102-a2cb-3936083eb6c9-trusted-ca\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.197899 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-config\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.198062 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b070473c-2a5e-4df6-8a05-4635a4c0262a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.198070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-ca\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.198593 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-service-ca\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.198901 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-trusted-ca-bundle\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199484 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-client\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-signing-key\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199538 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-tls\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199557 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-certificates\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199581 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxmwm\" (UniqueName: \"kubernetes.io/projected/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-kube-api-access-wxmwm\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199587 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d11378-44c7-4404-88ba-0f30941d6f46-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199617 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/647dd0b4-1274-4240-aeaa-2506e561ebba-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/af0ddea2-b781-4109-a681-c4d1dd3083d9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7jhrs\" (UID: \"af0ddea2-b781-4109-a681-c4d1dd3083d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199667 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshbk\" (UniqueName: \"kubernetes.io/projected/b37afb82-f85f-47d3-ad0c-5c7c60b74083-kube-api-access-wshbk\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199691 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/34aee0ca-bbdb-4db5-b1be-8203841a4436-apiservice-cert\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199723 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-socket-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199739 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-csi-data-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199763 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-bound-sa-token\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-oauth-serving-cert\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddpw\" (UniqueName: \"kubernetes.io/projected/420c9b8a-a626-4eb7-885e-7290574cfc30-kube-api-access-nddpw\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199841 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-service-ca\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.199840 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-encryption-config\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.200011 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: 
I0129 16:31:39.200392 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90236b09-6475-4964-a2bf-ad6835024f83-metrics-tls\") pod \"dns-operator-744455d44c-ktp2w\" (UID: \"90236b09-6475-4964-a2bf-ad6835024f83\") " pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.201575 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgprj\" (UniqueName: \"kubernetes.io/projected/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-kube-api-access-sgprj\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.201737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-certificates\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.201801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.202221 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-srv-cert\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.202513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-oauth-config\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.202599 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-oauth-serving-cert\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.202672 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-serving-cert\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.202975 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6cb1eff8-1177-4967-805e-bc2fc9cbea95-serving-cert\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.203383 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-service-ca\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.207687 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z2rnf\" (UID: \"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.207804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/54dd6b37-e552-4102-a2cb-3936083eb6c9-metrics-tls\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.208010 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90236b09-6475-4964-a2bf-ad6835024f83-metrics-tls\") pod \"dns-operator-744455d44c-ktp2w\" (UID: \"90236b09-6475-4964-a2bf-ad6835024f83\") " pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.208445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6cb1eff8-1177-4967-805e-bc2fc9cbea95-etcd-client\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.210299 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/420c9b8a-a626-4eb7-885e-7290574cfc30-serving-cert\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.213852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-tls\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.214313 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7729409c-8459-492d-ac6c-f156327c6e2e-etcd-client\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.225013 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv5cb\" (UniqueName: \"kubernetes.io/projected/ef94c4ec-2043-4d8f-ab0c-ac2458a44c82-kube-api-access-xv5cb\") pod \"downloads-7954f5f757-xr6mc\" (UID: 
\"ef94c4ec-2043-4d8f-ab0c-ac2458a44c82\") " pod="openshift-console/downloads-7954f5f757-xr6mc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.238738 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.242640 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w62ln\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-kube-api-access-w62ln\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.263896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhr4z\" (UniqueName: \"kubernetes.io/projected/84994fa0-3f61-4bca-b679-6bc0a4cb1558-kube-api-access-xhr4z\") pod \"marketplace-operator-79b997595-jw9lk\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.283751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9nx8\" (UniqueName: \"kubernetes.io/projected/b070473c-2a5e-4df6-8a05-4635a4c0262a-kube-api-access-n9nx8\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.301510 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.301639 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:39.801611444 +0000 UTC m=+152.288814660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.302248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a35ad584-e384-463f-b679-485f7c31cd20-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lvcqb\" (UID: \"a35ad584-e384-463f-b679-485f7c31cd20\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.302285 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsh25\" (UniqueName: \"kubernetes.io/projected/af0ddea2-b781-4109-a681-c4d1dd3083d9-kube-api-access-tsh25\") pod \"package-server-manager-789f6589d5-7jhrs\" (UID: \"af0ddea2-b781-4109-a681-c4d1dd3083d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.302307 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dvm5\" (UniqueName: \"kubernetes.io/projected/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-kube-api-access-8dvm5\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.302495 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6595166b-564e-41bc-8d72-4306ee7da59d-service-ca-bundle\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.302915 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4llb\" (UniqueName: \"kubernetes.io/projected/39a766be-6117-42a5-9635-d01fe1fbb58e-kube-api-access-x4llb\") pod \"machine-config-operator-74547568cd-sh7x9\" (UID: \"39a766be-6117-42a5-9635-d01fe1fbb58e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6595166b-564e-41bc-8d72-4306ee7da59d-service-ca-bundle\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.302516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjsgx\" (UniqueName: \"kubernetes.io/projected/6595166b-564e-41bc-8d72-4306ee7da59d-kube-api-access-tjsgx\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qs2ct\" (UniqueName: \"kubernetes.io/projected/34aee0ca-bbdb-4db5-b1be-8203841a4436-kube-api-access-qs2ct\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303415 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303447 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-signing-cabundle\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303466 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14576512-2790-478f-b22a-441dc5340af6-metrics-tls\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34aee0ca-bbdb-4db5-b1be-8203841a4436-webhook-cert\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303509 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647dd0b4-1274-4240-aeaa-2506e561ebba-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7kj\" (UniqueName: \"kubernetes.io/projected/25d33142-291e-456d-9718-50a45d83db4a-kube-api-access-rc7kj\") pod \"migrator-59844c95c7-qpxtb\" (UID: \"25d33142-291e-456d-9718-50a45d83db4a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303542 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp9dt\" (UniqueName: \"kubernetes.io/projected/14576512-2790-478f-b22a-441dc5340af6-kube-api-access-dp9dt\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303563 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-signing-key\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303585 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/af0ddea2-b781-4109-a681-c4d1dd3083d9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7jhrs\" (UID: \"af0ddea2-b781-4109-a681-c4d1dd3083d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303606 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/647dd0b4-1274-4240-aeaa-2506e561ebba-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303621 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-csi-data-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshbk\" (UniqueName: \"kubernetes.io/projected/b37afb82-f85f-47d3-ad0c-5c7c60b74083-kube-api-access-wshbk\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34aee0ca-bbdb-4db5-b1be-8203841a4436-apiservice-cert\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-socket-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303764 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fc7adf32-69e7-4f6a-be51-a11b653b1982-srv-cert\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303799 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-stats-auth\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-plugins-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-proxy-tls\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303883 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bc8593c-b449-4611-b280-016930c5b2b8-cert\") pod \"ingress-canary-8mpzd\" (UID: \"8bc8593c-b449-4611-b280-016930c5b2b8\") " pod="openshift-ingress-canary/ingress-canary-8mpzd" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303899 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbd5z\" (UniqueName: \"kubernetes.io/projected/8bc8593c-b449-4611-b280-016930c5b2b8-kube-api-access-bbd5z\") pod \"ingress-canary-8mpzd\" (UID: \"8bc8593c-b449-4611-b280-016930c5b2b8\") " pod="openshift-ingress-canary/ingress-canary-8mpzd" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303916 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fc7adf32-69e7-4f6a-be51-a11b653b1982-profile-collector-cert\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303934 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303951 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b37afb82-f85f-47d3-ad0c-5c7c60b74083-secret-volume\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303968 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-mountpoint-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.303992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34aee0ca-bbdb-4db5-b1be-8203841a4436-tmpfs\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 
16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-registration-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304024 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14576512-2790-478f-b22a-441dc5340af6-config-volume\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-serving-cert\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304066 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/af809a7f-01b1-428e-a351-048f0d1c0d33-node-bootstrap-token\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304285 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/af809a7f-01b1-428e-a351-048f0d1c0d33-certs\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304313 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkdh5\" (UniqueName: \"kubernetes.io/projected/a35ad584-e384-463f-b679-485f7c31cd20-kube-api-access-dkdh5\") pod \"multus-admission-controller-857f4d67dd-lvcqb\" (UID: \"a35ad584-e384-463f-b679-485f7c31cd20\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304332 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffsvw\" (UniqueName: \"kubernetes.io/projected/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-kube-api-access-ffsvw\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304349 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-metrics-certs\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.304367 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/647dd0b4-1274-4240-aeaa-2506e561ebba-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305129 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-signing-cabundle\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305150 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34aee0ca-bbdb-4db5-b1be-8203841a4436-tmpfs\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-mountpoint-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305562 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6pf\" (UniqueName: \"kubernetes.io/projected/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-kube-api-access-th6pf\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305603 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-default-certificate\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305619 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b37afb82-f85f-47d3-ad0c-5c7c60b74083-config-volume\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlflg\" (UniqueName: \"kubernetes.io/projected/af809a7f-01b1-428e-a351-048f0d1c0d33-kube-api-access-mlflg\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305658 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882lk\" (UniqueName: \"kubernetes.io/projected/b032c755-2648-42f4-805f-275a96ab5ef5-kube-api-access-882lk\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305720 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57nn\" (UniqueName: \"kubernetes.io/projected/fc7adf32-69e7-4f6a-be51-a11b653b1982-kube-api-access-v57nn\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305747 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305767 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-config\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.305876 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.306164 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-config\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.306552 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.308483 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-registration-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.309382 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-socket-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.309609 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:39.809591578 +0000 UTC m=+152.296794794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.311507 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-csi-data-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.310253 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.310782 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b37afb82-f85f-47d3-ad0c-5c7c60b74083-config-volume\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.311125 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b37afb82-f85f-47d3-ad0c-5c7c60b74083-secret-volume\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.313528 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fc7adf32-69e7-4f6a-be51-a11b653b1982-srv-cert\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.313950 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-serving-cert\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.314492 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-signing-key\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 
16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.315999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-config\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.316384 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8bc8593c-b449-4611-b280-016930c5b2b8-cert\") pod \"ingress-canary-8mpzd\" (UID: \"8bc8593c-b449-4611-b280-016930c5b2b8\") " pod="openshift-ingress-canary/ingress-canary-8mpzd" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.309640 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14576512-2790-478f-b22a-441dc5340af6-config-volume\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.316771 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/af0ddea2-b781-4109-a681-c4d1dd3083d9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7jhrs\" (UID: \"af0ddea2-b781-4109-a681-c4d1dd3083d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.317251 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-config\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.317314 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b032c755-2648-42f4-805f-275a96ab5ef5-plugins-dir\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.317779 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/647dd0b4-1274-4240-aeaa-2506e561ebba-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.317909 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-proxy-tls\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.318564 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-default-certificate\") pod \"router-default-5444994796-sn7j7\" (UID: 
\"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.319611 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/647dd0b4-1274-4240-aeaa-2506e561ebba-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.320494 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fc7adf32-69e7-4f6a-be51-a11b653b1982-profile-collector-cert\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.321152 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14576512-2790-478f-b22a-441dc5340af6-metrics-tls\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.321169 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34aee0ca-bbdb-4db5-b1be-8203841a4436-apiservice-cert\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.321206 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-metrics-certs\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.321824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/af809a7f-01b1-428e-a351-048f0d1c0d33-certs\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.323444 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a35ad584-e384-463f-b679-485f7c31cd20-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-lvcqb\" (UID: \"a35ad584-e384-463f-b679-485f7c31cd20\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.330156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/af809a7f-01b1-428e-a351-048f0d1c0d33-node-bootstrap-token\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.330906 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqctl\" (UniqueName: 
\"kubernetes.io/projected/43d11378-44c7-4404-88ba-0f30941d6f46-kube-api-access-vqctl\") pod \"kube-storage-version-migrator-operator-b67b599dd-fr8rp\" (UID: \"43d11378-44c7-4404-88ba-0f30941d6f46\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.332513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6595166b-564e-41bc-8d72-4306ee7da59d-stats-auth\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.339705 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34aee0ca-bbdb-4db5-b1be-8203841a4436-webhook-cert\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.352179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw85q\" (UniqueName: \"kubernetes.io/projected/6cb1eff8-1177-4967-805e-bc2fc9cbea95-kube-api-access-tw85q\") pod \"etcd-operator-b45778765-jpjx8\" (UID: \"6cb1eff8-1177-4967-805e-bc2fc9cbea95\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.376096 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b070473c-2a5e-4df6-8a05-4635a4c0262a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-89jp5\" (UID: \"b070473c-2a5e-4df6-8a05-4635a4c0262a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.402853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54dd6b37-e552-4102-a2cb-3936083eb6c9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.413408 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.414023 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:39.914003451 +0000 UTC m=+152.401206667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.425912 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lct88\" (UniqueName: \"kubernetes.io/projected/d8ff5836-7af1-4527-9b8c-77d02e8e2986-kube-api-access-lct88\") pod \"openshift-controller-manager-operator-756b6f6bc6-hkbhl\" (UID: \"d8ff5836-7af1-4527-9b8c-77d02e8e2986\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.442128 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.443009 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk5bz\" (UniqueName: \"kubernetes.io/projected/7729409c-8459-492d-ac6c-f156327c6e2e-kube-api-access-fk5bz\") pod \"apiserver-76f77b778f-px5lt\" (UID: \"7729409c-8459-492d-ac6c-f156327c6e2e\") " pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.452608 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xr6mc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.461719 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf"] Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.466591 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.474617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbldc\" (UniqueName: \"kubernetes.io/projected/54dd6b37-e552-4102-a2cb-3936083eb6c9-kube-api-access-fbldc\") pod \"ingress-operator-5b745b69d9-zj9k4\" (UID: \"54dd6b37-e552-4102-a2cb-3936083eb6c9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.486546 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87kb\" (UniqueName: \"kubernetes.io/projected/90236b09-6475-4964-a2bf-ad6835024f83-kube-api-access-r87kb\") pod \"dns-operator-744455d44c-ktp2w\" (UID: \"90236b09-6475-4964-a2bf-ad6835024f83\") " pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" Jan 29 16:31:39 crc kubenswrapper[4813]: W0129 16:31:39.486848 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc96f2b8_f3a5_4c00_bdef_95c1bc1a38d4.slice/crio-284889eb80f185cd71c76add0d0be7c5efdeb311b29e5117319cb15415b57819 WatchSource:0}: Error finding container 284889eb80f185cd71c76add0d0be7c5efdeb311b29e5117319cb15415b57819: Status 404 returned error can't find the container with id 284889eb80f185cd71c76add0d0be7c5efdeb311b29e5117319cb15415b57819 Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.501143 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.502692 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npvzf\" (UniqueName: \"kubernetes.io/projected/7f7cd5f7-bcff-4bc3-a908-19e62cea720c-kube-api-access-npvzf\") pod \"control-plane-machine-set-operator-78cbb6b69f-f299f\" (UID: \"7f7cd5f7-bcff-4bc3-a908-19e62cea720c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.511305 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.516156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.516844 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.016812219 +0000 UTC m=+152.504015695 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.522629 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.537739 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c34b1b5-1dc1-4a13-8179-d7c86297c95d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8gmkz\" (UID: \"0c34b1b5-1dc1-4a13-8179-d7c86297c95d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.547558 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxmwm\" (UniqueName: \"kubernetes.io/projected/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-kube-api-access-wxmwm\") pod \"console-f9d7485db-f9sf8\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.547730 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.557134 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.562246 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.563407 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-bound-sa-token\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.585913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddpw\" (UniqueName: \"kubernetes.io/projected/420c9b8a-a626-4eb7-885e-7290574cfc30-kube-api-access-nddpw\") pod \"route-controller-manager-6576b87f9c-dr774\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.599999 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.610259 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsh25\" (UniqueName: \"kubernetes.io/projected/af0ddea2-b781-4109-a681-c4d1dd3083d9-kube-api-access-tsh25\") pod \"package-server-manager-789f6589d5-7jhrs\" (UID: \"af0ddea2-b781-4109-a681-c4d1dd3083d9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.617188 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.617743 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.117718704 +0000 UTC m=+152.604921920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.628026 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dvm5\" (UniqueName: \"kubernetes.io/projected/f50ae26e-b7ce-4284-ab22-5d635d4c4fd3-kube-api-access-8dvm5\") pod \"service-ca-9c57cc56f-fzhnr\" (UID: \"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.645757 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjsgx\" (UniqueName: \"kubernetes.io/projected/6595166b-564e-41bc-8d72-4306ee7da59d-kube-api-access-tjsgx\") pod \"router-default-5444994796-sn7j7\" (UID: \"6595166b-564e-41bc-8d72-4306ee7da59d\") " pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.665369 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2ct\" (UniqueName: \"kubernetes.io/projected/34aee0ca-bbdb-4db5-b1be-8203841a4436-kube-api-access-qs2ct\") pod \"packageserver-d55dfcdfc-gz2j9\" (UID: \"34aee0ca-bbdb-4db5-b1be-8203841a4436\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.671466 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.679363 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.685480 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/647dd0b4-1274-4240-aeaa-2506e561ebba-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jldbx\" (UID: \"647dd0b4-1274-4240-aeaa-2506e561ebba\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.696628 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.713089 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jpjx8"] Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.714553 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.723065 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.724351 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.224231586 +0000 UTC m=+152.711434812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.727549 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkdh5\" (UniqueName: \"kubernetes.io/projected/a35ad584-e384-463f-b679-485f7c31cd20-kube-api-access-dkdh5\") pod \"multus-admission-controller-857f4d67dd-lvcqb\" (UID: \"a35ad584-e384-463f-b679-485f7c31cd20\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.749985 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshbk\" (UniqueName: \"kubernetes.io/projected/b37afb82-f85f-47d3-ad0c-5c7c60b74083-kube-api-access-wshbk\") pod \"collect-profiles-29495070-pkrnw\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.765183 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlflg\" (UniqueName: \"kubernetes.io/projected/af809a7f-01b1-428e-a351-048f0d1c0d33-kube-api-access-mlflg\") pod \"machine-config-server-xxmmc\" (UID: \"af809a7f-01b1-428e-a351-048f0d1c0d33\") " pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.768599 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882lk\" (UniqueName: \"kubernetes.io/projected/b032c755-2648-42f4-805f-275a96ab5ef5-kube-api-access-882lk\") pod \"csi-hostpathplugin-gq7jc\" (UID: \"b032c755-2648-42f4-805f-275a96ab5ef5\") " pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.776644 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.781208 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xxmmc" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.787164 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.787383 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57nn\" (UniqueName: \"kubernetes.io/projected/fc7adf32-69e7-4f6a-be51-a11b653b1982-kube-api-access-v57nn\") pod \"catalog-operator-68c6474976-cgxrr\" (UID: \"fc7adf32-69e7-4f6a-be51-a11b653b1982\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.802800 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.812392 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7kj\" (UniqueName: \"kubernetes.io/projected/25d33142-291e-456d-9718-50a45d83db4a-kube-api-access-rc7kj\") pod \"migrator-59844c95c7-qpxtb\" (UID: \"25d33142-291e-456d-9718-50a45d83db4a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.822984 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp9dt\" (UniqueName: \"kubernetes.io/projected/14576512-2790-478f-b22a-441dc5340af6-kube-api-access-dp9dt\") pod \"dns-default-glpkt\" (UID: \"14576512-2790-478f-b22a-441dc5340af6\") " pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.824666 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.825164 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.325141552 +0000 UTC m=+152.812344768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.830484 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.855890 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6pf\" (UniqueName: \"kubernetes.io/projected/64abb25e-e7ff-48fd-8df1-1f8e83ce26fa-kube-api-access-th6pf\") pod \"service-ca-operator-777779d784-vrtnf\" (UID: \"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.883001 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbd5z\" (UniqueName: \"kubernetes.io/projected/8bc8593c-b449-4611-b280-016930c5b2b8-kube-api-access-bbd5z\") pod \"ingress-canary-8mpzd\" (UID: \"8bc8593c-b449-4611-b280-016930c5b2b8\") " pod="openshift-ingress-canary/ingress-canary-8mpzd" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.893989 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/034ec6a4-14f9-4f7c-8cae-0535a5b44f1d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vrfcs\" (UID: \"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.910329 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.913815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffsvw\" (UniqueName: \"kubernetes.io/projected/f0a00c9c-81ed-46ca-b80e-c620c7d07a39-kube-api-access-ffsvw\") pod \"machine-config-controller-84d6567774-hbs2r\" (UID: \"f0a00c9c-81ed-46ca-b80e-c620c7d07a39\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.922489 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.926157 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:39 crc kubenswrapper[4813]: E0129 16:31:39.926546 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.42653254 +0000 UTC m=+152.913735756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:39 crc kubenswrapper[4813]: I0129 16:31:39.990188 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.008419 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.024367 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.027329 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.027833 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.527812156 +0000 UTC m=+153.015015372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.030047 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.039392 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.065278 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.072377 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4zm4s" event={"ID":"e7d52c6b-8245-4244-af56-394714287a4f","Type":"ContainerStarted","Data":"14730f11cfa602c8ddf65c4dc1f1bec4f6c81dfec03086664c33121c6a31cd21"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.072442 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4zm4s" event={"ID":"e7d52c6b-8245-4244-af56-394714287a4f","Type":"ContainerStarted","Data":"ede804cda22ce79c21bcfdb9561d6411ca1066aeb76c011b396525ea540d13f2"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.073356 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8mpzd" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.073699 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4zm4s" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.083538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" event={"ID":"6cb1eff8-1177-4967-805e-bc2fc9cbea95","Type":"ContainerStarted","Data":"ce69c4d507e35e8328d8dcc8799b3845bf024aa0532283ae22bb828c4086df68"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.096405 4813 patch_prober.go:28] interesting pod/console-operator-58897d9998-4zm4s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.096464 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4zm4s" podUID="e7d52c6b-8245-4244-af56-394714287a4f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.101618 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.119576 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" event={"ID":"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4","Type":"ContainerStarted","Data":"57bc2852684bee5f8e9f23bdf388980c84aaa934982c8a903b3a2b545d0e370c"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.119626 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" event={"ID":"fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4","Type":"ContainerStarted","Data":"284889eb80f185cd71c76add0d0be7c5efdeb311b29e5117319cb15415b57819"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.120053 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.128569 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.128941 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.628929147 +0000 UTC m=+153.116132363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.136887 4813 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z2rnf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.136956 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" podUID="fc96f2b8-f3a5-4c00-bdef-95c1bc1a38d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.179561 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.214550 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" event={"ID":"b54f84ee-bc73-4837-b44a-a88a1aa81f6c","Type":"ContainerStarted","Data":"d4b45bdbc93889afca52c10aaf1ac6358e2c0b070587ff84379212d125df892e"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.214597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" event={"ID":"b54f84ee-bc73-4837-b44a-a88a1aa81f6c","Type":"ContainerStarted","Data":"822f98414e61f1ba4e2fac42328faf57499410b1f606382937b5430027d9a937"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.225060 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xr6mc"] Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.231658 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.231820 4813 generic.go:334] "Generic (PLEG): container finished" podID="0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7" containerID="22e0f2a7e549b0154adc15e7a08ec99dddd8f98ebb99e6a447c456c4a35927fc" exitCode=0 Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.231988 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" event={"ID":"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7","Type":"ContainerDied","Data":"22e0f2a7e549b0154adc15e7a08ec99dddd8f98ebb99e6a447c456c4a35927fc"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.232030 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" event={"ID":"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7","Type":"ContainerStarted","Data":"a9bd7938b0707e3877017d35457f7383aad98e45f824c2aafa8541486b8bb4d1"} Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.232875 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.732847466 +0000 UTC m=+153.220050682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.238158 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" event={"ID":"d0eb230b-052d-4248-a827-a4b9a58281e3","Type":"ContainerStarted","Data":"dd60b98a6657b793d8b69e42d29fe2fef7f37caed974bb491b3717dc6d290066"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.238219 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" event={"ID":"d0eb230b-052d-4248-a827-a4b9a58281e3","Type":"ContainerStarted","Data":"ad16667f7e3ac36235848fed2b42ea4ffdacb34b8933ca8a06c43f912a0bbc3c"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.238231 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" event={"ID":"d0eb230b-052d-4248-a827-a4b9a58281e3","Type":"ContainerStarted","Data":"c842f18c028835fca909b81f57dae29320220748d88f20862e0321140d285f47"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.246614 4813 generic.go:334] "Generic (PLEG): container finished" podID="73b52c55-232d-40df-bd6c-2cee7681deeb" containerID="39bef283b92cf7ccb721d3b02f4c7a4161217e59c4b18fdf7f85825362299283" exitCode=0 Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.290513 4813 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xsv7r container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.290641 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" podUID="284b2466-e05a-45dc-af3a-2f36a1409b95" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.333008 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.334102 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.83408571 +0000 UTC m=+153.321288926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.336738 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.336774 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" event={"ID":"73b52c55-232d-40df-bd6c-2cee7681deeb","Type":"ContainerDied","Data":"39bef283b92cf7ccb721d3b02f4c7a4161217e59c4b18fdf7f85825362299283"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.336791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" event={"ID":"73b52c55-232d-40df-bd6c-2cee7681deeb","Type":"ContainerStarted","Data":"b13add175ed36282a44b74a72bf800cb7f05834e94d377ec4c7a061298a56712"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.336804 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp"] Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.336828 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" event={"ID":"284b2466-e05a-45dc-af3a-2f36a1409b95","Type":"ContainerStarted","Data":"d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.336850 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sn7j7" event={"ID":"6595166b-564e-41bc-8d72-4306ee7da59d","Type":"ContainerStarted","Data":"5bbd3d05016430fc84ba3a128a94f3e52f13297f7b3fabe2072b03ea7c3a524c"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.336864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xxmmc" event={"ID":"af809a7f-01b1-428e-a351-048f0d1c0d33","Type":"ContainerStarted","Data":"1275027d0b34c80dd9fb69b340f6207aebb921a7e1def2a016dae8819683a24f"} Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.336875 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jw9lk"] Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.375535 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.435363 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.435940 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.93588231 +0000 UTC m=+153.423085526 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.436561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.440782 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:40.940755497 +0000 UTC m=+153.427958713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: W0129 16:31:40.503277 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef94c4ec_2043_4d8f_ab0c_ac2458a44c82.slice/crio-cbf224b7c5c5484f42cb6373fe62d26fe3dc0f63f4f00b5a954e376c2dd96dde WatchSource:0}: Error finding container cbf224b7c5c5484f42cb6373fe62d26fe3dc0f63f4f00b5a954e376c2dd96dde: Status 404 returned error can't find the container with id cbf224b7c5c5484f42cb6373fe62d26fe3dc0f63f4f00b5a954e376c2dd96dde Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.539999 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.541434 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.041408455 +0000 UTC m=+153.528611671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.641768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.642713 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.142697601 +0000 UTC m=+153.629900817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.671151 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.674486 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.728478 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.728549 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.743937 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.744339 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:31:41.244319186 +0000 UTC m=+153.731522402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.845821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.846260 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.346245469 +0000 UTC m=+153.833448685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:40 crc kubenswrapper[4813]: I0129 16:31:40.946678 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:40 crc kubenswrapper[4813]: E0129 16:31:40.947378 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.44736121 +0000 UTC m=+153.934564416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.049027 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.049534 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.54951627 +0000 UTC m=+154.036719486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.150005 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.151075 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.651030123 +0000 UTC m=+154.138233339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.252270 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.252809 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.752788141 +0000 UTC m=+154.239991367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.302232 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" event={"ID":"43d11378-44c7-4404-88ba-0f30941d6f46","Type":"ContainerStarted","Data":"6faa5fafa68d4ceb48fc46eb4841f971ea556d657d1c386545cf2eab8652b271"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.302311 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" event={"ID":"43d11378-44c7-4404-88ba-0f30941d6f46","Type":"ContainerStarted","Data":"98b045c2a2715d5d747916c956268d86e250d6ed7c5fb4fb7d5626a588cd3132"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.314816 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" event={"ID":"73b52c55-232d-40df-bd6c-2cee7681deeb","Type":"ContainerStarted","Data":"e67d16ace2e828e892c544d858922b8b9aa77998fccad0cecd1c4c2f4dfac0ce"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.314921 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.322942 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xr6mc" event={"ID":"ef94c4ec-2043-4d8f-ab0c-ac2458a44c82","Type":"ContainerStarted","Data":"34a2e4b1bf76ab189b29617b367cb26f6fde2fb5a954be4f24fc60d917630ade"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.322990 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xr6mc" 
event={"ID":"ef94c4ec-2043-4d8f-ab0c-ac2458a44c82","Type":"ContainerStarted","Data":"cbf224b7c5c5484f42cb6373fe62d26fe3dc0f63f4f00b5a954e376c2dd96dde"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.323832 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xr6mc" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.325333 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gzgqp" podStartSLOduration=132.325302322 podStartE2EDuration="2m12.325302322s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.324609562 +0000 UTC m=+153.811812798" watchObservedRunningTime="2026-01-29 16:31:41.325302322 +0000 UTC m=+153.812505538" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.331380 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" event={"ID":"6cb1eff8-1177-4967-805e-bc2fc9cbea95","Type":"ContainerStarted","Data":"ea341a3b49af67754ca2d72f955d9a5eca880893bdc3c0af1a8e199d043d3671"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.331742 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-xr6mc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.331808 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xr6mc" podUID="ef94c4ec-2043-4d8f-ab0c-ac2458a44c82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.336812 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" event={"ID":"0b4f35f5-f9dd-49f0-bbde-5c85a8363eb7","Type":"ContainerStarted","Data":"04e6ac39ec552daf3efe30ddaad004f6b90522275219fbb8b4c3b171702376bb"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.342753 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sn7j7" event={"ID":"6595166b-564e-41bc-8d72-4306ee7da59d","Type":"ContainerStarted","Data":"ed952310c1d7c672428e46fabfe919dfc13e4d611581b869987047969d89a733"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.347730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" event={"ID":"84994fa0-3f61-4bca-b679-6bc0a4cb1558","Type":"ContainerStarted","Data":"b4e67949b6efb1b933910bfe0fc3d3677424bececd633cd9fe7353a9019ea0d0"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.347792 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" event={"ID":"84994fa0-3f61-4bca-b679-6bc0a4cb1558","Type":"ContainerStarted","Data":"3ab6e5be3b0dbfaeed85cfab0a3861dc411f86f0ec2696593efecd9ec520642c"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.348181 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.352311 4813 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jw9lk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.352397 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" podUID="84994fa0-3f61-4bca-b679-6bc0a4cb1558" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.353005 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xxmmc" event={"ID":"af809a7f-01b1-428e-a351-048f0d1c0d33","Type":"ContainerStarted","Data":"e54f4cb5f9be1b8a9069b5e2904d53dcec917022f16695ec2f9096fe9306fb84"} Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.353059 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.353193 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.853169222 +0000 UTC m=+154.340372438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.353646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.354075 4813 patch_prober.go:28] interesting pod/console-operator-58897d9998-4zm4s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.354145 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4zm4s" podUID="e7d52c6b-8245-4244-af56-394714287a4f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.354191 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.85417019 +0000 UTC m=+154.341373406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.375172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.380605 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.405240 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-twpsz" podStartSLOduration=133.405204099 podStartE2EDuration="2m13.405204099s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.400518107 +0000 UTC m=+153.887721323" watchObservedRunningTime="2026-01-29 16:31:41.405204099 +0000 UTC m=+153.892407315" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.438507 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwsk" podStartSLOduration=133.43847368 podStartE2EDuration="2m13.43847368s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.43847421 +0000 UTC m=+153.925677436" watchObservedRunningTime="2026-01-29 16:31:41.43847368 +0000 UTC m=+153.925676896" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.454891 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.457034 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:41.957008729 +0000 UTC m=+154.444211945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.482389 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" podStartSLOduration=133.482365109 podStartE2EDuration="2m13.482365109s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.481726681 +0000 UTC m=+153.968929897" watchObservedRunningTime="2026-01-29 16:31:41.482365109 +0000 UTC m=+153.969568325" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.558651 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.559032 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.059019015 +0000 UTC m=+154.546222231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.643239 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xxmmc" podStartSLOduration=5.643216402 podStartE2EDuration="5.643216402s" podCreationTimestamp="2026-01-29 16:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.640661741 +0000 UTC m=+154.127864957" watchObservedRunningTime="2026-01-29 16:31:41.643216402 +0000 UTC m=+154.130419618" Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.660074 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.660409 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.160392753 +0000 UTC m=+154.647595969 (durationBeforeRetry 500ms). 
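The repeating UnmountVolume.TearDown and MountVolume.MountDevice failures above share a single cause: the kubelet resolves a CSI volume's driver by name against its registry of node-registered plugins, and kubevirt.io.hostpath-provisioner has not registered yet, so both the unmounter for pod 8f668bae-612b-4b75-9490-919e737c6a3b and the mounter for image-registry-697d97f7c8-cmdkm fail before any CSI call is made. The drivers a node has registered are mirrored in that node's CSINode object, so one way to watch for the registration from outside is a sketch like the following (a hypothetical diagnostic, not kubelet code; it assumes client-go and kubeconfig access, and "crc" is the node name seen in these records):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical diagnostic: the drivers listed in a node's CSINode object
	// mirror what the plugin registrar has registered with that node's kubelet.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "crc" is the node name that appears in the log records above.
	n, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range n.Spec.Drivers {
		// kubevirt.io.hostpath-provisioner should appear here once registration completes.
		fmt.Println(d.Name)
	}
}

Until that name appears, every retry of the two pending volume operations below fails the same way.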
Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.660409 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.160392753 +0000 UTC m=+154.647595969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.668916 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-jpjx8" podStartSLOduration=133.668895231 podStartE2EDuration="2m13.668895231s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.667747969 +0000 UTC m=+154.154951185" watchObservedRunningTime="2026-01-29 16:31:41.668895231 +0000 UTC m=+154.156098447"
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.688991 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:41 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:41 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:41 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.689055 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.725848 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" podStartSLOduration=133.725827845 podStartE2EDuration="2m13.725827845s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.703008676 +0000 UTC m=+154.190211892" watchObservedRunningTime="2026-01-29 16:31:41.725827845 +0000 UTC m=+154.213031061"
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.726305 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f"]
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.747930 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-sn7j7" podStartSLOduration=133.747913894 podStartE2EDuration="2m13.747913894s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.74670919 +0000 UTC m=+154.233912406" watchObservedRunningTime="2026-01-29 16:31:41.747913894 +0000 UTC m=+154.235117110"
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.762032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.762549 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.262530153 +0000 UTC m=+154.749733369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.780578 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z2rnf" podStartSLOduration=132.780557407 podStartE2EDuration="2m12.780557407s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.778028677 +0000 UTC m=+154.265231913" watchObservedRunningTime="2026-01-29 16:31:41.780557407 +0000 UTC m=+154.267760643"
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.812136 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" podStartSLOduration=132.81209403 podStartE2EDuration="2m12.81209403s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.808378086 +0000 UTC m=+154.295581302" watchObservedRunningTime="2026-01-29 16:31:41.81209403 +0000 UTC m=+154.299297246"
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.862640 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.863058 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.363041987 +0000 UTC m=+154.850245203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.872377 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl"]
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.921440 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-px5lt"]
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.954857 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" podStartSLOduration=133.954828636 podStartE2EDuration="2m13.954828636s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:41.949282821 +0000 UTC m=+154.436486037" watchObservedRunningTime="2026-01-29 16:31:41.954828636 +0000 UTC m=+154.442031852"
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.964083 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:41 crc kubenswrapper[4813]: E0129 16:31:41.964437 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.464425295 +0000 UTC m=+154.951628511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:41 crc kubenswrapper[4813]: I0129 16:31:41.976427 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.023769 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4zm4s" podStartSLOduration=134.023744966 podStartE2EDuration="2m14.023744966s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:42.000787503 +0000 UTC m=+154.487990729" watchObservedRunningTime="2026-01-29 16:31:42.023744966 +0000 UTC m=+154.510948182"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.025634 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.041659 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8zrgw" podStartSLOduration=134.041634797 podStartE2EDuration="2m14.041634797s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:42.033412467 +0000 UTC m=+154.520615683" watchObservedRunningTime="2026-01-29 16:31:42.041634797 +0000 UTC m=+154.528838013"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.042128 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.102045 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.102590 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.602565513 +0000 UTC m=+155.089768729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.126271 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ktp2w"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.133588 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.140043 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.160159 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fr8rp" podStartSLOduration=133.160142445 podStartE2EDuration="2m13.160142445s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:42.117514041 +0000 UTC m=+154.604717257" watchObservedRunningTime="2026-01-29 16:31:42.160142445 +0000 UTC m=+154.647345661"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.183408 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" podStartSLOduration=134.183394996 podStartE2EDuration="2m14.183394996s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:42.182553622 +0000 UTC m=+154.669756838" watchObservedRunningTime="2026-01-29 16:31:42.183394996 +0000 UTC m=+154.670598212"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.215535 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.215825 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.715813313 +0000 UTC m=+155.203016529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.296896 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" podStartSLOduration=133.296875133 podStartE2EDuration="2m13.296875133s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:42.250494934 +0000 UTC m=+154.737698160" watchObservedRunningTime="2026-01-29 16:31:42.296875133 +0000 UTC m=+154.784078349"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.317389 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.317761 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.817746487 +0000 UTC m=+155.304949693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.324369 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xr6mc" podStartSLOduration=134.324351272 podStartE2EDuration="2m14.324351272s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:42.299369733 +0000 UTC m=+154.786572939" watchObservedRunningTime="2026-01-29 16:31:42.324351272 +0000 UTC m=+154.811554488"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.334035 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8mpzd"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.419374 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.420355 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:42.920336939 +0000 UTC m=+155.407540155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.422696 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" event={"ID":"90236b09-6475-4964-a2bf-ad6835024f83","Type":"ContainerStarted","Data":"8289c742abf96ef573d0ea89c37a5ad446118046faaa17e6268662a50384c7e5"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.432872 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.436397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" event={"ID":"7f7cd5f7-bcff-4bc3-a908-19e62cea720c","Type":"ContainerStarted","Data":"6675c1705a04f3ab195da89568b63e62ba5035b8ffd91acff1be044eb95692bd"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.436450 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" event={"ID":"7f7cd5f7-bcff-4bc3-a908-19e62cea720c","Type":"ContainerStarted","Data":"4e68f90b7f7a57bde5efd7d69b97f4ecb9cef8852b9efb938432bcb9e7ed4cf0"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.452562 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" event={"ID":"54dd6b37-e552-4102-a2cb-3936083eb6c9","Type":"ContainerStarted","Data":"01fccf9d43b823db378c726b85ddc7014db26dda53fca9a8f43e5e4bc8fec0d2"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.458179 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.458748 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.478098 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f299f" podStartSLOduration=133.478077116 podStartE2EDuration="2m13.478077116s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:42.473719764 +0000 UTC m=+154.960922990" watchObservedRunningTime="2026-01-29 16:31:42.478077116 +0000 UTC m=+154.965280332"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.473042 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-f9sf8"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.479654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" event={"ID":"7729409c-8459-492d-ac6c-f156327c6e2e","Type":"ContainerStarted","Data":"d577cc31196f8ed28f6540f6edebd96d9d599c1b5ea153793504cc427ca86c09"}
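The connect: connection refused readiness failures interleaved through these records (downloads, marketplace-operator, console-operator, and later route-controller-manager) are a separate, ordinary startup race rather than anything CSI-related: PLEG reports ContainerStarted, the prober immediately starts polling the container's HTTP endpoint, and each attempt fails until the server inside the container binds its port. A minimal sketch of one such check in the spirit of prober.go (illustrative only; the URL is the marketplace-operator endpoint from the records above):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// readyOnce performs a single HTTP readiness check roughly the way the
// kubelet's prober does: a connection error or a non-2xx/3xx status is a failure.
func readyOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "connect: connection refused" while the server is still starting
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed with status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := readyOnce("http://10.217.0.15:8080/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	} else {
		fmt.Println("ready")
	}
}

Consistent with that, the same pods flip to status="ready" in the SyncLoop (probe) records once their endpoints come up.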
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.479682 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gq7jc"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.504751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" event={"ID":"420c9b8a-a626-4eb7-885e-7290574cfc30","Type":"ContainerStarted","Data":"5bb99359019f9f7642118007b5341bc12813c7117bbb6cc0629014d06a3932e0"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.507976 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" event={"ID":"b070473c-2a5e-4df6-8a05-4635a4c0262a","Type":"ContainerStarted","Data":"6d5fc6a7d22cccb1312f229c6493a8c984fafa05e9512608190901b691417dfa"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.509775 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" event={"ID":"d8ff5836-7af1-4527-9b8c-77d02e8e2986","Type":"ContainerStarted","Data":"980a85b08f4b2213857b0cd9a283d8c1969a7d9985a1ade7e520d1b3bd0bca0e"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.509810 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" event={"ID":"d8ff5836-7af1-4527-9b8c-77d02e8e2986","Type":"ContainerStarted","Data":"6ca31821732a0a8b4f6fa7164605ea3a3865ef02ff4f4da50e18a11d1581cf9e"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.513464 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" event={"ID":"0c34b1b5-1dc1-4a13-8179-d7c86297c95d","Type":"ContainerStarted","Data":"f0c1ed6beb55274735c33427e63070e9f1177687b57f4ac63b9819ae1b76f4d7"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.515615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" event={"ID":"39a766be-6117-42a5-9635-d01fe1fbb58e","Type":"ContainerStarted","Data":"9d6ebdadb02c44410d489fbc4611c1a88338c8d2531ba74f29de5a5f84dfc409"}
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.521194 4813 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jw9lk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.521252 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" podUID="84994fa0-3f61-4bca-b679-6bc0a4cb1558" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.522287 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-xr6mc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.522350 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xr6mc" podUID="ef94c4ec-2043-4d8f-ab0c-ac2458a44c82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.522595 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.523721 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.023704013 +0000 UTC m=+155.510907229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.545933 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4zm4s"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.556787 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.562947 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-lvcqb"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.571771 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.588972 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hkbhl" podStartSLOduration=134.58895351 podStartE2EDuration="2m14.58895351s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:42.557739816 +0000 UTC m=+155.044943032" watchObservedRunningTime="2026-01-29 16:31:42.58895351 +0000 UTC m=+155.076156726"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.590501 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.615449 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzhnr"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.624534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.631553 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-glpkt"]
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.638753 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.138737184 +0000 UTC m=+155.625940400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.656758 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.664828 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx"]
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.666182 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw"]
Jan 29 16:31:42 crc kubenswrapper[4813]: W0129 16:31:42.675850 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34aee0ca_bbdb_4db5_b1be_8203841a4436.slice/crio-32fd770a82a9f79c89a6789f4b34019a18e9349ef7b1979248c5b0c1cb035dad WatchSource:0}: Error finding container 32fd770a82a9f79c89a6789f4b34019a18e9349ef7b1979248c5b0c1cb035dad: Status 404 returned error can't find the container with id 32fd770a82a9f79c89a6789f4b34019a18e9349ef7b1979248c5b0c1cb035dad
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.679127 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:42 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:42 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:42 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.679175 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.732681 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.732952 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.232936491 +0000 UTC m=+155.720139707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: W0129 16:31:42.801982 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25d33142_291e_456d_9718_50a45d83db4a.slice/crio-1cccd884477a41064ef110b7256c25a411203168a4a73419b97d7d7e5d5baab1 WatchSource:0}: Error finding container 1cccd884477a41064ef110b7256c25a411203168a4a73419b97d7d7e5d5baab1: Status 404 returned error can't find the container with id 1cccd884477a41064ef110b7256c25a411203168a4a73419b97d7d7e5d5baab1
Jan 29 16:31:42 crc kubenswrapper[4813]: W0129 16:31:42.814136 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf0ddea2_b781_4109_a681_c4d1dd3083d9.slice/crio-c6e051e20061c11ba76da58dccefd8fc11e703fb13b516c0eccc6ad733d36f14 WatchSource:0}: Error finding container c6e051e20061c11ba76da58dccefd8fc11e703fb13b516c0eccc6ad733d36f14: Status 404 returned error can't find the container with id c6e051e20061c11ba76da58dccefd8fc11e703fb13b516c0eccc6ad733d36f14
Jan 29 16:31:42 crc kubenswrapper[4813]: W0129 16:31:42.820037 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb032c755_2648_42f4_805f_275a96ab5ef5.slice/crio-719f3a2c837ab78c8f3977450560916237d4875f350e1a55f7f0c74505380fb7 WatchSource:0}: Error finding container 719f3a2c837ab78c8f3977450560916237d4875f350e1a55f7f0c74505380fb7: Status 404 returned error can't find the container with id 719f3a2c837ab78c8f3977450560916237d4875f350e1a55f7f0c74505380fb7
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.834483 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
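The cadence of the recurring records is set by nestedpendingoperations: each failed volume operation is parked until now+500ms (the durationBeforeRetry printed in every E-record), after which the reconciler re-queues the identical MountVolume or UnmountVolume operation, which is why the same pair recurs at roughly half-second intervals below. A minimal sketch of that retry gate (names are illustrative, not kubelet source; the kubelet's real implementation also supports exponential backoff):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate mimics the pattern visible in the log: after a failure the
// operation is blocked until now+delay, then the reconciler may retry it.
type retryGate struct {
	notBefore time.Time
	delay     time.Duration
}

func (g *retryGate) run(op func() error) error {
	if time.Now().Before(g.notBefore) {
		return fmt.Errorf("no retries permitted until %s", g.notBefore.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.notBefore = time.Now().Add(g.delay) // durationBeforeRetry
		return err
	}
	return nil
}

func main() {
	g := &retryGate{delay: 500 * time.Millisecond}
	mount := func() error {
		// Stand-in for attacher.MountDevice failing while the driver is unregistered.
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	for i := 0; i < 3; i++ {
		if err := g.run(mount); err != nil {
			fmt.Println("Error:", err)
		}
		time.Sleep(600 * time.Millisecond) // wait out the gate before the next attempt
	}
}

Note the SyncLoop UPDATE for hostpath-provisioner/csi-hostpathplugin-gq7jc above: once that plugin pod comes up and registers the driver with the kubelet, a subsequent retry of these same operations can succeed.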
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.834900 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.334880834 +0000 UTC m=+155.822084050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.941331 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.941801 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.441761207 +0000 UTC m=+155.928964423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:42 crc kubenswrapper[4813]: I0129 16:31:42.941950 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:42 crc kubenswrapper[4813]: E0129 16:31:42.942618 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.44259625 +0000 UTC m=+155.929799466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.043026 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.043411 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.543391602 +0000 UTC m=+156.030594818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.146300 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.146600 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.646588281 +0000 UTC m=+156.133791487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.247396 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.247572 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.747543678 +0000 UTC m=+156.234746894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.248545 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.249096 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.749073611 +0000 UTC m=+156.236276827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.349434 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.349737 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.849722538 +0000 UTC m=+156.336925754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.451768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.452314 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:43.95229536 +0000 UTC m=+156.439498586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.552713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.553091 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.053074742 +0000 UTC m=+156.540277958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.553377 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.553620 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.053612957 +0000 UTC m=+156.540816173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.577807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" event={"ID":"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3","Type":"ContainerStarted","Data":"f1fbd6e833d6b88344e9d77f4c5346f1834a51448b8e1e63e2de9bcddcdce9fc"}
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.577854 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" event={"ID":"f50ae26e-b7ce-4284-ab22-5d635d4c4fd3","Type":"ContainerStarted","Data":"f7e1b13bb300acff12a34db5670015ad6af4dd5b9ca8371f97b96c1d8c01905f"}
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.596892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" event={"ID":"420c9b8a-a626-4eb7-885e-7290574cfc30","Type":"ContainerStarted","Data":"a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5"}
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.597870 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.608828 4813 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dr774 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.608890 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" podUID="420c9b8a-a626-4eb7-885e-7290574cfc30" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.623337 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fzhnr" podStartSLOduration=134.623321668 podStartE2EDuration="2m14.623321668s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:43.62229382 +0000 UTC m=+156.109497056" watchObservedRunningTime="2026-01-29 16:31:43.623321668 +0000 UTC m=+156.110524884"
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.654061 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.655682 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.155658694 +0000 UTC m=+156.642861910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.662994 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" podStartSLOduration=134.662971319 podStartE2EDuration="2m14.662971319s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:43.661278461 +0000 UTC m=+156.148481677" watchObservedRunningTime="2026-01-29 16:31:43.662971319 +0000 UTC m=+156.150174545"
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.667965 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" event={"ID":"54dd6b37-e552-4102-a2cb-3936083eb6c9","Type":"ContainerStarted","Data":"e67a1c5beeb8742220beea6836ab8507f3ee2cdf568f286d5be6987aa752824d"}
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.676793 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" event={"ID":"f0a00c9c-81ed-46ca-b80e-c620c7d07a39","Type":"ContainerStarted","Data":"75a8e1647134c0e2eae6f89b7b1cb602065ec0786501fdaa5d12d03f96f70097"}
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.690515 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:43 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:43 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:43 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.690750 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.698604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" event={"ID":"af0ddea2-b781-4109-a681-c4d1dd3083d9","Type":"ContainerStarted","Data":"c6e051e20061c11ba76da58dccefd8fc11e703fb13b516c0eccc6ad733d36f14"}
Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.702234 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw"
event={"ID":"b37afb82-f85f-47d3-ad0c-5c7c60b74083","Type":"ContainerStarted","Data":"8bda82e6345a184efed87da1ba1e0feecff8575887e42e5e8099cba3c25e6c2a"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.718593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" event={"ID":"34aee0ca-bbdb-4db5-b1be-8203841a4436","Type":"ContainerStarted","Data":"32fd770a82a9f79c89a6789f4b34019a18e9349ef7b1979248c5b0c1cb035dad"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.735668 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8mpzd" event={"ID":"8bc8593c-b449-4611-b280-016930c5b2b8","Type":"ContainerStarted","Data":"555b61ac39bb57bcd76a91a6448b628fd23e84bf42e99e56d08d8f903e8b0e5f"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.741381 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" event={"ID":"25d33142-291e-456d-9718-50a45d83db4a","Type":"ContainerStarted","Data":"1cccd884477a41064ef110b7256c25a411203168a4a73419b97d7d7e5d5baab1"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.756626 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.756705 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.757428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.757779 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.257762422 +0000 UTC m=+156.744965638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.761987 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" event={"ID":"b070473c-2a5e-4df6-8a05-4635a4c0262a","Type":"ContainerStarted","Data":"f73bc68a6c6c554f0d2e2635085ab87f5ca4d7eee9c13d160ed3b4cf777e4d30"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.769329 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8mpzd" podStartSLOduration=7.769308066 podStartE2EDuration="7.769308066s" podCreationTimestamp="2026-01-29 16:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:43.768995697 +0000 UTC m=+156.256198913" watchObservedRunningTime="2026-01-29 16:31:43.769308066 +0000 UTC m=+156.256511282" Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.786102 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.786738 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" event={"ID":"fc7adf32-69e7-4f6a-be51-a11b653b1982","Type":"ContainerStarted","Data":"4caea523f03e064a9d6f660931ea1997600ed21da35a19bb064908105ef2fcc8"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.804714 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-89jp5" podStartSLOduration=135.804698956 podStartE2EDuration="2m15.804698956s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:43.804026188 +0000 UTC m=+156.291229404" watchObservedRunningTime="2026-01-29 16:31:43.804698956 +0000 UTC m=+156.291902172" Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.818206 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-glpkt" event={"ID":"14576512-2790-478f-b22a-441dc5340af6","Type":"ContainerStarted","Data":"7c639dbc9c309dc2c5102eac0c0bbd9839463a2a1551386fd3e0eca47c95932e"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.831876 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" event={"ID":"647dd0b4-1274-4240-aeaa-2506e561ebba","Type":"ContainerStarted","Data":"f6ff97d88a448afa6dc8bfa578d10aa49cb5ff1ec758a3e07a7ab02717686263"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.833353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" 
event={"ID":"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d","Type":"ContainerStarted","Data":"2f1103ac5b1b9a17bcdfc63240c6a6621e194527463655adbdd279fe09616fe5"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.859122 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.859268 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.359244094 +0000 UTC m=+156.846447310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.859468 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.859960 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.359944823 +0000 UTC m=+156.847148039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.870549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" event={"ID":"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa","Type":"ContainerStarted","Data":"2fbbd04da7eab6a962335f601e9056384286f7ad303e262de34503df0383d814"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.882564 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" event={"ID":"a35ad584-e384-463f-b679-485f7c31cd20","Type":"ContainerStarted","Data":"6b6582beeecaacfa9cc871a7c916825b57ef602e7d1dc6c2df84ca718e86381a"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.897305 4813 generic.go:334] "Generic (PLEG): container finished" podID="7729409c-8459-492d-ac6c-f156327c6e2e" containerID="df6e28ea129aee005288e66a7b41bb3afbae4dbaaf7e9d9852edf89d6353ad5a" exitCode=0 Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.897410 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" event={"ID":"7729409c-8459-492d-ac6c-f156327c6e2e","Type":"ContainerDied","Data":"df6e28ea129aee005288e66a7b41bb3afbae4dbaaf7e9d9852edf89d6353ad5a"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.909530 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" event={"ID":"b032c755-2648-42f4-805f-275a96ab5ef5","Type":"ContainerStarted","Data":"719f3a2c837ab78c8f3977450560916237d4875f350e1a55f7f0c74505380fb7"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.927557 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" podStartSLOduration=134.927535735 podStartE2EDuration="2m14.927535735s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:43.916685152 +0000 UTC m=+156.403888378" watchObservedRunningTime="2026-01-29 16:31:43.927535735 +0000 UTC m=+156.414738951" Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.937954 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" event={"ID":"0c34b1b5-1dc1-4a13-8179-d7c86297c95d","Type":"ContainerStarted","Data":"eafa45bb162efde3e4bf1fde4128b91313d796cb7b02ee6c36f7b701b9308b66"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.963224 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:43 crc kubenswrapper[4813]: E0129 16:31:43.964769 4813 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.464741497 +0000 UTC m=+156.951944713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.987250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f9sf8" event={"ID":"0a897da4-3d6d-41f6-9fea-695b30bcd6f7","Type":"ContainerStarted","Data":"d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686"} Jan 29 16:31:43 crc kubenswrapper[4813]: I0129 16:31:43.987400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f9sf8" event={"ID":"0a897da4-3d6d-41f6-9fea-695b30bcd6f7","Type":"ContainerStarted","Data":"75f94ad76f45869a70de3e68ad2aeea010b7e6fb00e6bbe9d49b992d6f69524c"} Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.031044 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vjz9" Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.043126 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-f9sf8" podStartSLOduration=136.04307803 podStartE2EDuration="2m16.04307803s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:44.042192826 +0000 UTC m=+156.529396052" watchObservedRunningTime="2026-01-29 16:31:44.04307803 +0000 UTC m=+156.530281246" Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.044741 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8gmkz" podStartSLOduration=136.044731787 podStartE2EDuration="2m16.044731787s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:44.021710422 +0000 UTC m=+156.508913638" watchObservedRunningTime="2026-01-29 16:31:44.044731787 +0000 UTC m=+156.531935003" Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.064673 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.070663 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 16:31:44.570643212 +0000 UTC m=+157.057846418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.165958 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.166414 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.666396823 +0000 UTC m=+157.153600029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.269545 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.269939 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.769923861 +0000 UTC m=+157.257127077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.381747 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.382450 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.882421441 +0000 UTC m=+157.369624657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.382694 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.383303 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.883280925 +0000 UTC m=+157.370484141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.485266 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.485746 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:44.985714903 +0000 UTC m=+157.472918149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.587035 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.587899 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.087881113 +0000 UTC m=+157.575084329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.679048 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:31:44 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Jan 29 16:31:44 crc kubenswrapper[4813]: [+]process-running ok Jan 29 16:31:44 crc kubenswrapper[4813]: healthz check failed Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.679147 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.691597 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.692045 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.192025809 +0000 UTC m=+157.679229025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.793695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.794098 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.294083216 +0000 UTC m=+157.781286432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.866167 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jwdfq" Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.894428 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.894702 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.394686533 +0000 UTC m=+157.881889749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:44 crc kubenswrapper[4813]: I0129 16:31:44.997946 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:44 crc kubenswrapper[4813]: E0129 16:31:44.998346 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.498308114 +0000 UTC m=+157.985511330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.005425 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8mpzd" event={"ID":"8bc8593c-b449-4611-b280-016930c5b2b8","Type":"ContainerStarted","Data":"5810844684b0f47c4d010fea3d9af38edbb3fe36b6298923594603be183f9d75"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.012571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" event={"ID":"7729409c-8459-492d-ac6c-f156327c6e2e","Type":"ContainerStarted","Data":"fa390f40796134485d0ea2b2f57be2552163fb44ce2baa7aea0d93a323e8c1fe"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.036965 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" event={"ID":"b37afb82-f85f-47d3-ad0c-5c7c60b74083","Type":"ContainerStarted","Data":"9387a32d56cda5ecea6a5147513e78dd169c89b5f381b3e987b327edc3d7f7ce"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.043050 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" event={"ID":"34aee0ca-bbdb-4db5-b1be-8203841a4436","Type":"ContainerStarted","Data":"542ceeb370ca9d2271bd1c08b51658c87a6d08e892d829c9f1a00b8dd27cad10"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.043680 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.045997 4813 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gz2j9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.046049 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" podUID="34aee0ca-bbdb-4db5-b1be-8203841a4436" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.058958 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" podStartSLOduration=105.058942702 podStartE2EDuration="1m45.058942702s" podCreationTimestamp="2026-01-29 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.058326174 +0000 UTC m=+157.545529390" watchObservedRunningTime="2026-01-29 16:31:45.058942702 +0000 UTC m=+157.546145918" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.059011 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" 
event={"ID":"a35ad584-e384-463f-b679-485f7c31cd20","Type":"ContainerStarted","Data":"5466cb7a97beb90db5e71fcc794a6f03b5c5d3ada7f856215482bcf751051df8"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.059042 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" event={"ID":"a35ad584-e384-463f-b679-485f7c31cd20","Type":"ContainerStarted","Data":"81d3a4b8a611e193d690ad7b453b00a8f067e506118da80cd160ff86ca1f503c"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.067045 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" event={"ID":"54dd6b37-e552-4102-a2cb-3936083eb6c9","Type":"ContainerStarted","Data":"7bac41bcb5114c659981a7be256510b27cc247b618e2e3775bca839983ecce23"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.069479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" event={"ID":"fc7adf32-69e7-4f6a-be51-a11b653b1982","Type":"ContainerStarted","Data":"a719315b8fdade1ece80cb31a5384b3b31a0cc0c48c25139b0aead7e34cdcbf9"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.070067 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.071189 4813 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cgxrr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.071415 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" podUID="fc7adf32-69e7-4f6a-be51-a11b653b1982" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.097504 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" event={"ID":"39a766be-6117-42a5-9635-d01fe1fbb58e","Type":"ContainerStarted","Data":"e79c7786629a1ff6ba46abe044dda27f81f7b9448e3193dd31da8c682292227e"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.097579 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" event={"ID":"39a766be-6117-42a5-9635-d01fe1fbb58e","Type":"ContainerStarted","Data":"831f8ba5675ccf363dff2f58eb8abe04749c62367dd8f84319276985b1c62c40"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.101574 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.101733 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 16:31:45.601703859 +0000 UTC m=+158.088907085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.102061 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.104477 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.604463646 +0000 UTC m=+158.091666862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.109518 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-lvcqb" podStartSLOduration=136.109504327 podStartE2EDuration="2m16.109504327s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.105221517 +0000 UTC m=+157.592424753" watchObservedRunningTime="2026-01-29 16:31:45.109504327 +0000 UTC m=+157.596707543" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.109930 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" podStartSLOduration=136.109925769 podStartE2EDuration="2m16.109925769s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.081807412 +0000 UTC m=+157.569010638" watchObservedRunningTime="2026-01-29 16:31:45.109925769 +0000 UTC m=+157.597128985" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.114292 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" event={"ID":"034ec6a4-14f9-4f7c-8cae-0535a5b44f1d","Type":"ContainerStarted","Data":"b80d071092c99ae8f7557a325089420270efdd1edc25c4027513a3acb2a9cc2c"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.124232 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-vrtnf" event={"ID":"64abb25e-e7ff-48fd-8df1-1f8e83ce26fa","Type":"ContainerStarted","Data":"8404a2d2cfa111290d46e3e4863d01d3c831038919802924a54d77d9e66d66f7"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.129167 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" event={"ID":"f0a00c9c-81ed-46ca-b80e-c620c7d07a39","Type":"ContainerStarted","Data":"bf49bb3deac008da6c698d9c78754f9c859a42e40a97c5275b444c62b2036818"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.129219 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" event={"ID":"f0a00c9c-81ed-46ca-b80e-c620c7d07a39","Type":"ContainerStarted","Data":"f0e270eaabf0a1af37b7c5da3c0f06f5853625b9c1c165ff34c9563a8fe42a6b"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.132649 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" event={"ID":"25d33142-291e-456d-9718-50a45d83db4a","Type":"ContainerStarted","Data":"3f01d4f34e5f5fca18d823506f32cbef07e3b924d1d43978d34ff55f372b49cb"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.132694 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" event={"ID":"25d33142-291e-456d-9718-50a45d83db4a","Type":"ContainerStarted","Data":"7d7ac2845df6b763a7a0f50282a5fb5e8651f05c7ecc50948c69fffe691b1fa6"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.135405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" event={"ID":"90236b09-6475-4964-a2bf-ad6835024f83","Type":"ContainerStarted","Data":"5e627993f11f600ba014dde79eef6cd22ee77be97b434fa4739afd486591462e"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.135443 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" event={"ID":"90236b09-6475-4964-a2bf-ad6835024f83","Type":"ContainerStarted","Data":"d7a804c86222a28a851ebe1d3b5c693f53e2a7515cf20fdbed1c000fd4e72ed8"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.138126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" event={"ID":"af0ddea2-b781-4109-a681-c4d1dd3083d9","Type":"ContainerStarted","Data":"6ed93dfc06a3dc6533c91ca83f5c184286b19557b750584e7e648fb2d3e4a18c"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.138152 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" event={"ID":"af0ddea2-b781-4109-a681-c4d1dd3083d9","Type":"ContainerStarted","Data":"7266200ac7c8f4fb3b3bb66c0b8b2332b8e75e0dcd9e167427f0d75f82a1deb6"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.138600 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.164948 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-glpkt" event={"ID":"14576512-2790-478f-b22a-441dc5340af6","Type":"ContainerStarted","Data":"e10da21cac2c79f972d9739af9979fc337c2a0ea63aab14fef3801bdeca2df79"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 
16:31:45.164993 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-glpkt" event={"ID":"14576512-2790-478f-b22a-441dc5340af6","Type":"ContainerStarted","Data":"63d2d4f1a67b98c438e7d6d6f3c8be749676dd7a3b3a02e0ac06f35ad42dec8f"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.165106 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-glpkt" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.169293 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zj9k4" podStartSLOduration=137.16926342 podStartE2EDuration="2m17.16926342s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.166988086 +0000 UTC m=+157.654191302" watchObservedRunningTime="2026-01-29 16:31:45.16926342 +0000 UTC m=+157.656466636" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.170600 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" event={"ID":"647dd0b4-1274-4240-aeaa-2506e561ebba","Type":"ContainerStarted","Data":"28ae66120e0f7656a4564acefbcf5c0eb41394734e2e3a61b3a4a4a1dfc8fb3f"} Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.181464 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.203104 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.204558 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.704541998 +0000 UTC m=+158.191745214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.254869 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr" podStartSLOduration=136.254848276 podStartE2EDuration="2m16.254848276s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.199053934 +0000 UTC m=+157.686257150" watchObservedRunningTime="2026-01-29 16:31:45.254848276 +0000 UTC m=+157.742051492" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.296212 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ktp2w" podStartSLOduration=137.296186114 podStartE2EDuration="2m17.296186114s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.255811603 +0000 UTC m=+157.743014819" watchObservedRunningTime="2026-01-29 16:31:45.296186114 +0000 UTC m=+157.783389330" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.306617 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.312585 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.812562162 +0000 UTC m=+158.299765378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.395273 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpxtb" podStartSLOduration=136.395243057 podStartE2EDuration="2m16.395243057s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.394962639 +0000 UTC m=+157.882165855" watchObservedRunningTime="2026-01-29 16:31:45.395243057 +0000 UTC m=+157.882446273" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.395675 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hbs2r" podStartSLOduration=136.395668909 podStartE2EDuration="2m16.395668909s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.364969579 +0000 UTC m=+157.852172805" watchObservedRunningTime="2026-01-29 16:31:45.395668909 +0000 UTC m=+157.882872125" Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.417717 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.418375 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:45.918356594 +0000 UTC m=+158.405559800 (durationBeforeRetry 500ms). 
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.437879 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" podStartSLOduration=136.43786038 podStartE2EDuration="2m16.43786038s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.43715585 +0000 UTC m=+157.924359066" watchObservedRunningTime="2026-01-29 16:31:45.43786038 +0000 UTC m=+157.925063596"
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.490232 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vrfcs" podStartSLOduration=137.490206666 podStartE2EDuration="2m17.490206666s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.487665614 +0000 UTC m=+157.974868830" watchObservedRunningTime="2026-01-29 16:31:45.490206666 +0000 UTC m=+157.977409882"
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.519265 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.519843 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.019830165 +0000 UTC m=+158.507033381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.599167 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-glpkt" podStartSLOduration=9.599146546 podStartE2EDuration="9.599146546s" podCreationTimestamp="2026-01-29 16:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.559559477 +0000 UTC m=+158.046762713" watchObservedRunningTime="2026-01-29 16:31:45.599146546 +0000 UTC m=+158.086349752"
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.628533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.628923 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.128903809 +0000 UTC m=+158.616107025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.639588 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jldbx" podStartSLOduration=136.639567977 podStartE2EDuration="2m16.639567977s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.606021918 +0000 UTC m=+158.093225134" watchObservedRunningTime="2026-01-29 16:31:45.639567977 +0000 UTC m=+158.126771193"
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.686166 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:45 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:45 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:45 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.686221 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.729792 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.730180 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.230164394 +0000 UTC m=+158.717367610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
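The repeated MountVolume.MountDevice / UnmountVolume.TearDown failures above all reduce to one cause: the kubelet has not yet seen the kubevirt.io.hostpath-provisioner CSI plugin register itself. A minimal sketch for confirming which CSI drivers a node has registered, assuming kubectl access and the node name crc that appears in these records:

    # Drivers the kubelet has registered for this node (empty until registration completes)
    kubectl get csinode crc -o jsonpath='{.spec.drivers[*].name}'

    # On the node itself, each registered plugin exposes a socket under the kubelet plugin registry
    ls /var/lib/kubelet/plugins_registry/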
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.830824 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.830963 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.330946135 +0000 UTC m=+158.818149351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.831183 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.831523 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.331512271 +0000 UTC m=+158.818715487 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:45 crc kubenswrapper[4813]: I0129 16:31:45.932988 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:45 crc kubenswrapper[4813]: E0129 16:31:45.933501 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.433452105 +0000 UTC m=+158.920655321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.035243 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.035535 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.535522233 +0000 UTC m=+159.022725449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.136371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.136676 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.636661764 +0000 UTC m=+159.123864980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.237707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.238009 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.737997391 +0000 UTC m=+159.225200607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.338988 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.339491 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.839461432 +0000 UTC m=+159.326664648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.350631 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" event={"ID":"b032c755-2648-42f4-805f-275a96ab5ef5","Type":"ContainerStarted","Data":"823c7cf6c83b181f22d5d68e3bb840f0088701f0bf14c009423fd62d42bbcc13"}
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.352743 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" event={"ID":"7729409c-8459-492d-ac6c-f156327c6e2e","Type":"ContainerStarted","Data":"741c7798d90b29b06601d38c4dafa775aa9d77d52b07ee98c4153fcdb93eb7b8"}
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.370871 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cgxrr"
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.433806 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" podStartSLOduration=138.433775162 podStartE2EDuration="2m18.433775162s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:46.432330551 +0000 UTC m=+158.919533777" watchObservedRunningTime="2026-01-29 16:31:46.433775162 +0000 UTC m=+158.920978378"
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.434153 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sh7x9" podStartSLOduration=137.434148222 podStartE2EDuration="2m17.434148222s" podCreationTimestamp="2026-01-29 16:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:45.648499487 +0000 UTC m=+158.135702713" watchObservedRunningTime="2026-01-29 16:31:46.434148222 +0000 UTC m=+158.921351438"
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.440122 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.440589 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:46.940567902 +0000 UTC m=+159.427771128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.545128 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.545291 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.045260023 +0000 UTC m=+159.532463239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.545512 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.545812 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.045799628 +0000 UTC m=+159.533002844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.646426 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.646625 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.14659729 +0000 UTC m=+159.633800506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.646793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.647183 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.147169626 +0000 UTC m=+159.634372842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.677295 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:46 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:46 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:46 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.677527 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.747397 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.747562 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.247529216 +0000 UTC m=+159.734732432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.747676 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.748051 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.2480415 +0000 UTC m=+159.735244796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.848443 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.848678 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.348649087 +0000 UTC m=+159.835852303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.848793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.849146 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.34913249 +0000 UTC m=+159.836335706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.949719 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.949969 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.449911492 +0000 UTC m=+159.937114718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:46 crc kubenswrapper[4813]: I0129 16:31:46.950060 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:46 crc kubenswrapper[4813]: E0129 16:31:46.950412 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.450394255 +0000 UTC m=+159.937597671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.017794 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-klj7p"]
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.018709 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klj7p"
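Each failed attempt above is rescheduled by nestedpendingoperations with durationBeforeRetry 500ms, so the same error pair recurs roughly twice per second until the driver registers. A quick sketch for measuring the retry churn from a saved copy of this log (the file name kubelet.log is an assumption):

    # Total retry scheduling events
    grep -c 'durationBeforeRetry' kubelet.log

    # Just the retry deadlines for the stuck PVC
    grep 'pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8' kubelet.log | grep -o 'No retries permitted until [^(]*'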
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.022353 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.040016 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klj7p"]
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.051906 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.052198 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.552147594 +0000 UTC m=+160.039350810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.052279 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.052906 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.552738991 +0000 UTC m=+160.039942207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.153653 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.153909 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-utilities\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.153967 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-catalog-content\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.154010 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djckm\" (UniqueName: \"kubernetes.io/projected/64f64480-2953-4d8a-8374-7ee9bee7f712-kube-api-access-djckm\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.154131 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.654098618 +0000 UTC m=+160.141301834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.255269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.255347 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-utilities\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.255400 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-catalog-content\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.255428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djckm\" (UniqueName: \"kubernetes.io/projected/64f64480-2953-4d8a-8374-7ee9bee7f712-kube-api-access-djckm\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.255945 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-utilities\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.256223 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-catalog-content\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.256281 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.756267989 +0000 UTC m=+160.243471205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.282339 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djckm\" (UniqueName: \"kubernetes.io/projected/64f64480-2953-4d8a-8374-7ee9bee7f712-kube-api-access-djckm\") pod \"community-operators-klj7p\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.332968 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klj7p"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.353924 4813 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gz2j9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.353986 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" podUID="34aee0ca-bbdb-4db5-b1be-8203841a4436" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.356286 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.356564 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.856549517 +0000 UTC m=+160.343752733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.378515 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" event={"ID":"b032c755-2648-42f4-805f-275a96ab5ef5","Type":"ContainerStarted","Data":"eb4ce5c05b14017578eebf04ba206b77d3c90d6ccab2cc8bd713d4c286cedac3"}
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.379365 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" event={"ID":"b032c755-2648-42f4-805f-275a96ab5ef5","Type":"ContainerStarted","Data":"def518e08297f06f2170f839679acbbb3668526734adecf3351ff2c36a21266d"}
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.413225 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jdwp2"]
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.420488 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.435833 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdwp2"]
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.457824 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.459267 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:47.959252352 +0000 UTC m=+160.446455568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.532830 4813 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.561607 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.561787 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-utilities\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.561859 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52p5k\" (UniqueName: \"kubernetes.io/projected/97b1fc85-bbb9-4308-905f-26f3258e1cc1-kube-api-access-52p5k\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.561893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-catalog-content\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.561996 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.061981938 +0000 UTC m=+160.549185154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.616062 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-flcbx"]
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.617040 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flcbx"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.621823 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-flcbx"]
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.622151 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.662925 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52p5k\" (UniqueName: \"kubernetes.io/projected/97b1fc85-bbb9-4308-905f-26f3258e1cc1-kube-api-access-52p5k\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.663540 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.663580 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-catalog-content\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.663640 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-utilities\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.664398 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-utilities\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.664726 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.164709814 +0000 UTC m=+160.651913030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.665414 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-catalog-content\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.681995 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:47 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:47 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:47 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.682058 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.692302 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52p5k\" (UniqueName: \"kubernetes.io/projected/97b1fc85-bbb9-4308-905f-26f3258e1cc1-kube-api-access-52p5k\") pod \"community-operators-jdwp2\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.699304 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klj7p"]
Jan 29 16:31:47 crc kubenswrapper[4813]: W0129 16:31:47.760188 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64f64480_2953_4d8a_8374_7ee9bee7f712.slice/crio-aaea0958c30323e47269e4fce3b55e175381a55f652448002afeb47b12200d5c WatchSource:0}: Error finding container aaea0958c30323e47269e4fce3b55e175381a55f652448002afeb47b12200d5c: Status 404 returned error can't find the container with id aaea0958c30323e47269e4fce3b55e175381a55f652448002afeb47b12200d5c
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.765629 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.765799 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-catalog-content\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.765832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhl2\" (UniqueName: \"kubernetes.io/projected/d78854f3-3848-4a36-83dc-d10dd1d49d49-kube-api-access-cwhl2\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.765893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-utilities\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx"
Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.766003 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.26598788 +0000 UTC m=+160.753191096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.774287 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdwp2"
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.830705 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rfg87"]
Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.831791 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rfg87"
Need to start a new one" pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.841062 4813 csr.go:261] certificate signing request csr-h9cnh is approved, waiting to be issued Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.849046 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rfg87"] Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.855360 4813 csr.go:257] certificate signing request csr-h9cnh is issued Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.866492 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.866539 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-catalog-content\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.866561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwhl2\" (UniqueName: \"kubernetes.io/projected/d78854f3-3848-4a36-83dc-d10dd1d49d49-kube-api-access-cwhl2\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.866618 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-utilities\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.866929 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-catalog-content\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.866955 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-utilities\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.867248 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.367236464 +0000 UTC m=+160.854439680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.900277 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwhl2\" (UniqueName: \"kubernetes.io/projected/d78854f3-3848-4a36-83dc-d10dd1d49d49-kube-api-access-cwhl2\") pod \"certified-operators-flcbx\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.951778 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.974229 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.974877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-utilities\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.974924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr64c\" (UniqueName: \"kubernetes.io/projected/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-kube-api-access-rr64c\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:47 crc kubenswrapper[4813]: I0129 16:31:47.975036 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-catalog-content\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:47 crc kubenswrapper[4813]: E0129 16:31:47.975236 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.475208177 +0000 UTC m=+160.962411393 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.076837 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.076890 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-catalog-content\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.076917 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-utilities\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.076937 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr64c\" (UniqueName: \"kubernetes.io/projected/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-kube-api-access-rr64c\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.077378 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.577363457 +0000 UTC m=+161.064566673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.078314 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-catalog-content\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.078570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-utilities\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.099810 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdwp2"] Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.104524 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr64c\" (UniqueName: \"kubernetes.io/projected/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-kube-api-access-rr64c\") pod \"certified-operators-rfg87\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.162725 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.178265 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.178383 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.678361375 +0000 UTC m=+161.165564591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.178511 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.178830 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.678820028 +0000 UTC m=+161.166023244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.264791 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-flcbx"] Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.284866 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.285512 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.785468204 +0000 UTC m=+161.272671430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.389327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.406583 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 16:31:48.906556544 +0000 UTC m=+161.393759760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cmdkm" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.415840 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flcbx" event={"ID":"d78854f3-3848-4a36-83dc-d10dd1d49d49","Type":"ContainerStarted","Data":"a9f1fc1830537818c96b9d57c0b0b3bf4c8f3859f9884e3fac36fca26a0be1ef"} Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.427450 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" event={"ID":"b032c755-2648-42f4-805f-275a96ab5ef5","Type":"ContainerStarted","Data":"115fbfc95f5e160380e51980d0fa0efe938e3f256d7ae9dbd0e947fd54f14424"} Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.436304 4813 generic.go:334] "Generic (PLEG): container finished" podID="64f64480-2953-4d8a-8374-7ee9bee7f712" containerID="59e3b7bfc45843baefb30d50f86007f0ec7d79269be3efe1606188e75588ce97" exitCode=0 Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.436455 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klj7p" event={"ID":"64f64480-2953-4d8a-8374-7ee9bee7f712","Type":"ContainerDied","Data":"59e3b7bfc45843baefb30d50f86007f0ec7d79269be3efe1606188e75588ce97"} Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.436479 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klj7p" event={"ID":"64f64480-2953-4d8a-8374-7ee9bee7f712","Type":"ContainerStarted","Data":"aaea0958c30323e47269e4fce3b55e175381a55f652448002afeb47b12200d5c"} Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.442034 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.444394 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" containerID="93cac358cd759caa4af333e1a82bd732094f2e6d91253a1fedf4d18ce4f57193" exitCode=0 Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.444549 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdwp2" event={"ID":"97b1fc85-bbb9-4308-905f-26f3258e1cc1","Type":"ContainerDied","Data":"93cac358cd759caa4af333e1a82bd732094f2e6d91253a1fedf4d18ce4f57193"} Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.444593 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdwp2" event={"ID":"97b1fc85-bbb9-4308-905f-26f3258e1cc1","Type":"ContainerStarted","Data":"c06956778438330260594cc1973f5d23214b7a3ae63ef24a75e3d1de8189a554"} Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.447396 4813 generic.go:334] "Generic (PLEG): container finished" podID="b37afb82-f85f-47d3-ad0c-5c7c60b74083" containerID="9387a32d56cda5ecea6a5147513e78dd169c89b5f381b3e987b327edc3d7f7ce" exitCode=0 Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.448842 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" event={"ID":"b37afb82-f85f-47d3-ad0c-5c7c60b74083","Type":"ContainerDied","Data":"9387a32d56cda5ecea6a5147513e78dd169c89b5f381b3e987b327edc3d7f7ce"} Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.450712 4813 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T16:31:47.532861393Z","Handler":null,"Name":""} Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.457886 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gq7jc" podStartSLOduration=12.45786201 podStartE2EDuration="12.45786201s" podCreationTimestamp="2026-01-29 16:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:48.455784972 +0000 UTC m=+160.942988188" watchObservedRunningTime="2026-01-29 16:31:48.45786201 +0000 UTC m=+160.945065226" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.465188 4813 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.465252 4813 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.491415 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rfg87"] Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.493191 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.507272 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
(OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.594980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.599166 4813 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.599193 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.632729 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.632873 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52p5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jdwp2_openshift-marketplace(97b1fc85-bbb9-4308-905f-26f3258e1cc1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.634057 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.634767 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cmdkm\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.643031 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.643201 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-klj7p_openshift-marketplace(64f64480-2953-4d8a-8374-7ee9bee7f712): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:31:48 crc kubenswrapper[4813]: E0129 16:31:48.644356 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.677246 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:31:48 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Jan 29 16:31:48 crc kubenswrapper[4813]: [+]process-running ok Jan 29 16:31:48 crc kubenswrapper[4813]: healthz check failed Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.677354 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.868674 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 16:26:47 +0000 UTC, rotation deadline is 2026-11-22 04:40:40.506479191 +0000 UTC Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.869046 4813 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7116h8m51.637437889s for next certificate rotation Jan 29 16:31:48 crc kubenswrapper[4813]: I0129 16:31:48.871329 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.353634 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cmdkm"] Jan 29 16:31:49 crc kubenswrapper[4813]: W0129 16:31:49.365477 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90fcf277_fd30_4c95_80b6_4c5199172c6d.slice/crio-b5170c4907b295092e73e0a07e199411353d328db8761160dba5ee875933e6b8 WatchSource:0}: Error finding container b5170c4907b295092e73e0a07e199411353d328db8761160dba5ee875933e6b8: Status 404 returned error can't find the container with id b5170c4907b295092e73e0a07e199411353d328db8761160dba5ee875933e6b8 Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.395794 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ppft9"] Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.396931 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.399752 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.410698 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppft9"] Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.453432 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-xr6mc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.453491 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xr6mc" podUID="ef94c4ec-2043-4d8f-ab0c-ac2458a44c82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.453669 4813 patch_prober.go:28] interesting pod/downloads-7954f5f757-xr6mc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.453723 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xr6mc" podUID="ef94c4ec-2043-4d8f-ab0c-ac2458a44c82" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.454973 4813 generic.go:334] "Generic (PLEG): container finished" podID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" containerID="906c446bda213dcbe91f8d599dfe0ea9e2e12678807673684d7ee564319c150e" exitCode=0 Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.455510 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfg87" event={"ID":"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec","Type":"ContainerDied","Data":"906c446bda213dcbe91f8d599dfe0ea9e2e12678807673684d7ee564319c150e"} Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 
16:31:49.455621 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfg87" event={"ID":"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec","Type":"ContainerStarted","Data":"b737d0f4d035939f73a965fe491564415bec558c86df63cdce743c9494d47bb2"} Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.457599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" event={"ID":"90fcf277-fd30-4c95-80b6-4c5199172c6d","Type":"ContainerStarted","Data":"b5170c4907b295092e73e0a07e199411353d328db8761160dba5ee875933e6b8"} Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.461053 4813 generic.go:334] "Generic (PLEG): container finished" podID="d78854f3-3848-4a36-83dc-d10dd1d49d49" containerID="f4c574b1a2c3f1e62bd1f345bf105c2f1afefa48383690a57be6221feb12379e" exitCode=0 Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.462355 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flcbx" event={"ID":"d78854f3-3848-4a36-83dc-d10dd1d49d49","Type":"ContainerDied","Data":"f4c574b1a2c3f1e62bd1f345bf105c2f1afefa48383690a57be6221feb12379e"} Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.463842 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.463923 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.467488 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.469437 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.482471 4813 patch_prober.go:28] interesting pod/apiserver-76f77b778f-px5lt container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]log ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]etcd ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/max-in-flight-filter ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 16:31:49 crc kubenswrapper[4813]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 16:31:49 crc kubenswrapper[4813]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 16:31:49 crc kubenswrapper[4813]: 
[+]poststarthook/project.openshift.io-projectcache ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/openshift.io-startinformers ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 16:31:49 crc kubenswrapper[4813]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 16:31:49 crc kubenswrapper[4813]: livez check failed Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.483670 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-px5lt" podUID="7729409c-8459-492d-ac6c-f156327c6e2e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.504180 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.509421 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdwl\" (UniqueName: \"kubernetes.io/projected/91ba5590-c7f3-4892-95fb-c39fbffe7278-kube-api-access-qkdwl\") pod \"redhat-marketplace-ppft9\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.509527 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-utilities\") pod \"redhat-marketplace-ppft9\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.509560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-catalog-content\") pod \"redhat-marketplace-ppft9\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.579958 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.580130 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr64c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rfg87_openshift-marketplace(61eb0d4e-a892-4ca0-aad1-5c2fe1039fec): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.581282 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.585473 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.585609 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwhl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-flcbx_openshift-marketplace(d78854f3-3848-4a36-83dc-d10dd1d49d49): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.586874 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.610808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-utilities\") pod \"redhat-marketplace-ppft9\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.610912 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-catalog-content\") pod \"redhat-marketplace-ppft9\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.610948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkdwl\" (UniqueName: \"kubernetes.io/projected/91ba5590-c7f3-4892-95fb-c39fbffe7278-kube-api-access-qkdwl\") pod \"redhat-marketplace-ppft9\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.611334 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-utilities\") pod \"redhat-marketplace-ppft9\" (UID: 
\"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.612004 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-catalog-content\") pod \"redhat-marketplace-ppft9\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.648360 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkdwl\" (UniqueName: \"kubernetes.io/projected/91ba5590-c7f3-4892-95fb-c39fbffe7278-kube-api-access-qkdwl\") pod \"redhat-marketplace-ppft9\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.676404 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.679932 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:31:49 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Jan 29 16:31:49 crc kubenswrapper[4813]: [+]process-running ok Jan 29 16:31:49 crc kubenswrapper[4813]: healthz check failed Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.680036 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.722075 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gz2j9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.727702 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.741706 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.807321 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.807374 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.808819 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9n4"] Jan 29 16:31:49 crc kubenswrapper[4813]: E0129 16:31:49.809142 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37afb82-f85f-47d3-ad0c-5c7c60b74083" containerName="collect-profiles" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.809163 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37afb82-f85f-47d3-ad0c-5c7c60b74083" containerName="collect-profiles" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.809285 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37afb82-f85f-47d3-ad0c-5c7c60b74083" containerName="collect-profiles" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.810215 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc9n4" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.816393 4813 patch_prober.go:28] interesting pod/console-f9d7485db-f9sf8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.816501 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-f9sf8" podUID="0a897da4-3d6d-41f6-9fea-695b30bcd6f7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.817626 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b37afb82-f85f-47d3-ad0c-5c7c60b74083-secret-volume\") pod \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.817747 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b37afb82-f85f-47d3-ad0c-5c7c60b74083-config-volume\") pod \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.817922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wshbk\" (UniqueName: \"kubernetes.io/projected/b37afb82-f85f-47d3-ad0c-5c7c60b74083-kube-api-access-wshbk\") pod \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\" (UID: \"b37afb82-f85f-47d3-ad0c-5c7c60b74083\") " Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.824714 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37afb82-f85f-47d3-ad0c-5c7c60b74083-config-volume" (OuterVolumeSpecName: "config-volume") pod "b37afb82-f85f-47d3-ad0c-5c7c60b74083" (UID: "b37afb82-f85f-47d3-ad0c-5c7c60b74083"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.834872 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.837488 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37afb82-f85f-47d3-ad0c-5c7c60b74083-kube-api-access-wshbk" (OuterVolumeSpecName: "kube-api-access-wshbk") pod "b37afb82-f85f-47d3-ad0c-5c7c60b74083" (UID: "b37afb82-f85f-47d3-ad0c-5c7c60b74083"). InnerVolumeSpecName "kube-api-access-wshbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.843465 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.848413 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b37afb82-f85f-47d3-ad0c-5c7c60b74083-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b37afb82-f85f-47d3-ad0c-5c7c60b74083" (UID: "b37afb82-f85f-47d3-ad0c-5c7c60b74083"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.851460 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9n4"]
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.877816 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.880230 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.880605 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.919716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.919837 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-catalog-content\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.919858 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp6vk\" (UniqueName: \"kubernetes.io/projected/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-kube-api-access-mp6vk\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.919908 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.919934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-utilities\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.920025 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b37afb82-f85f-47d3-ad0c-5c7c60b74083-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.920038 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wshbk\" (UniqueName: \"kubernetes.io/projected/b37afb82-f85f-47d3-ad0c-5c7c60b74083-kube-api-access-wshbk\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:49 crc kubenswrapper[4813]: I0129 16:31:49.920054 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b37afb82-f85f-47d3-ad0c-5c7c60b74083-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.022269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-utilities\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.022426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.022481 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-catalog-content\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.022532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp6vk\" (UniqueName: \"kubernetes.io/projected/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-kube-api-access-mp6vk\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.022578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.022674 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.022932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-utilities\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.023238 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-catalog-content\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.044798 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp6vk\" (UniqueName: \"kubernetes.io/projected/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-kube-api-access-mp6vk\") pod \"redhat-marketplace-zc9n4\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.046885 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.120148 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppft9"]
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.140784 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc9n4"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.201532 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.252839 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.401897 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6hb9j"]
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.409985 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hb9j"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.423705 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hb9j"]
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.428802 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.486746 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw" event={"ID":"b37afb82-f85f-47d3-ad0c-5c7c60b74083","Type":"ContainerDied","Data":"8bda82e6345a184efed87da1ba1e0feecff8575887e42e5e8099cba3c25e6c2a"}
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.486845 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bda82e6345a184efed87da1ba1e0feecff8575887e42e5e8099cba3c25e6c2a"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.486971 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.487860 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9n4"]
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.493891 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" event={"ID":"90fcf277-fd30-4c95-80b6-4c5199172c6d","Type":"ContainerStarted","Data":"8caeed8ba7b049001659a2f83973706ff5c460c6dfd4bca4675c88edd36afc97"}
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.497015 4813 generic.go:334] "Generic (PLEG): container finished" podID="91ba5590-c7f3-4892-95fb-c39fbffe7278" containerID="05c84ecf83242e95e61100c660d1f96020f53759455d3314d081ee4731f7e7c7" exitCode=0
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.497076 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppft9" event={"ID":"91ba5590-c7f3-4892-95fb-c39fbffe7278","Type":"ContainerDied","Data":"05c84ecf83242e95e61100c660d1f96020f53759455d3314d081ee4731f7e7c7"}
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.497230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppft9" event={"ID":"91ba5590-c7f3-4892-95fb-c39fbffe7278","Type":"ContainerStarted","Data":"5f9f90451dae016cb0b0fcb381d6e37599a215243657f8dcba8040c11bca74f6"}
Jan 29 16:31:50 crc kubenswrapper[4813]: W0129 16:31:50.497009 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c65b4cb_fbe4_4d41_ae09_cf137cd6a4b2.slice/crio-adfb2f6cdbb310803bb0535d200040b448e90383f49b9a70ac2a1ff1f9f78e95 WatchSource:0}: Error finding container adfb2f6cdbb310803bb0535d200040b448e90383f49b9a70ac2a1ff1f9f78e95: Status 404 returned error can't find the container with id adfb2f6cdbb310803bb0535d200040b448e90383f49b9a70ac2a1ff1f9f78e95
Jan 29 16:31:50 crc kubenswrapper[4813]: E0129 16:31:50.498959 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec"
Jan 29 16:31:50 crc kubenswrapper[4813]: E0129 16:31:50.501195 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.521674 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" podStartSLOduration=142.51918329 podStartE2EDuration="2m22.51918329s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:50.515563289 +0000 UTC m=+163.002766545" watchObservedRunningTime="2026-01-29 16:31:50.51918329 +0000 UTC m=+163.006386506"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.529597 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-catalog-content\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.529702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-utilities\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.529749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28276\" (UniqueName: \"kubernetes.io/projected/a5f3efb2-a274-4b29-bc4b-30d924188614-kube-api-access-28276\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.632063 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-catalog-content\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.632251 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-utilities\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.632290 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28276\" (UniqueName: \"kubernetes.io/projected/a5f3efb2-a274-4b29-bc4b-30d924188614-kube-api-access-28276\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j"
Jan 29 16:31:50 crc kubenswrapper[4813]: E0129 16:31:50.633322 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:31:50 crc kubenswrapper[4813]: E0129 16:31:50.633461 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkdwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ppft9_openshift-marketplace(91ba5590-c7f3-4892-95fb-c39fbffe7278): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:31:50 crc kubenswrapper[4813]: E0129 16:31:50.634569 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.636250 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-utilities\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j"
Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.636427 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-catalog-content\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j"
\"kube-api-access-28276\" (UniqueName: \"kubernetes.io/projected/a5f3efb2-a274-4b29-bc4b-30d924188614-kube-api-access-28276\") pod \"redhat-operators-6hb9j\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " pod="openshift-marketplace/redhat-operators-6hb9j" Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.676974 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:31:50 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Jan 29 16:31:50 crc kubenswrapper[4813]: [+]process-running ok Jan 29 16:31:50 crc kubenswrapper[4813]: healthz check failed Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.677044 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.720090 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 16:31:50 crc kubenswrapper[4813]: W0129 16:31:50.727265 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod574d9fb3_6d3d_48d3_b46a_dd8869095af7.slice/crio-a655da25aa73f63dc29ba7c7f1f772e5189b6d5762442a5c9e92c725f629535e WatchSource:0}: Error finding container a655da25aa73f63dc29ba7c7f1f772e5189b6d5762442a5c9e92c725f629535e: Status 404 returned error can't find the container with id a655da25aa73f63dc29ba7c7f1f772e5189b6d5762442a5c9e92c725f629535e Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.738335 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hb9j" Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.813145 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wd6ld"] Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.819336 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.830582 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wd6ld"] Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.944043 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pxv6\" (UniqueName: \"kubernetes.io/projected/1106c979-ac94-49f2-a9a9-cd044c3df80c-kube-api-access-2pxv6\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.944103 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-catalog-content\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:50 crc kubenswrapper[4813]: I0129 16:31:50.944177 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-utilities\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.002917 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hb9j"] Jan 29 16:31:51 crc kubenswrapper[4813]: W0129 16:31:51.012863 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5f3efb2_a274_4b29_bc4b_30d924188614.slice/crio-7de9752add78416bc752cd729c8f7d8c233eea056aa3b50e95f653d76ff6e3b0 WatchSource:0}: Error finding container 7de9752add78416bc752cd729c8f7d8c233eea056aa3b50e95f653d76ff6e3b0: Status 404 returned error can't find the container with id 7de9752add78416bc752cd729c8f7d8c233eea056aa3b50e95f653d76ff6e3b0 Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.045491 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pxv6\" (UniqueName: \"kubernetes.io/projected/1106c979-ac94-49f2-a9a9-cd044c3df80c-kube-api-access-2pxv6\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.045592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-catalog-content\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.045671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-utilities\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.046416 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-catalog-content\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.046470 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-utilities\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.072751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pxv6\" (UniqueName: \"kubernetes.io/projected/1106c979-ac94-49f2-a9a9-cd044c3df80c-kube-api-access-2pxv6\") pod \"redhat-operators-wd6ld\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.153610 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.248394 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.252747 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e35b844b-1645-458c-b117-f60fe6042abe-metrics-certs\") pod \"network-metrics-daemon-nsttk\" (UID: \"e35b844b-1645-458c-b117-f60fe6042abe\") " pod="openshift-multus/network-metrics-daemon-nsttk" Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.341869 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wd6ld"] Jan 29 16:31:51 crc kubenswrapper[4813]: W0129 16:31:51.349059 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1106c979_ac94_49f2_a9a9_cd044c3df80c.slice/crio-0f62a9a59bb776e5467aa88b65e25d42514d50f09cb83f45874905f0ae378087 WatchSource:0}: Error finding container 0f62a9a59bb776e5467aa88b65e25d42514d50f09cb83f45874905f0ae378087: Status 404 returned error can't find the container with id 0f62a9a59bb776e5467aa88b65e25d42514d50f09cb83f45874905f0ae378087 Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.358968 4813 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.358968 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nsttk"
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.511941 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"574d9fb3-6d3d-48d3-b46a-dd8869095af7","Type":"ContainerStarted","Data":"2a1a5dd3934c6da1162597b3ebcc6d43d635ea6aaba77211246acd8f776d94c4"}
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.512749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"574d9fb3-6d3d-48d3-b46a-dd8869095af7","Type":"ContainerStarted","Data":"a655da25aa73f63dc29ba7c7f1f772e5189b6d5762442a5c9e92c725f629535e"}
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.531515 4813 generic.go:334] "Generic (PLEG): container finished" podID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" containerID="c9fe0c1cf128be6497ce86fb75fc16d5aeae3009a6149bf8f8e9b8e3ef739d9f" exitCode=0
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.531620 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9n4" event={"ID":"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2","Type":"ContainerDied","Data":"c9fe0c1cf128be6497ce86fb75fc16d5aeae3009a6149bf8f8e9b8e3ef739d9f"}
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.531681 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9n4" event={"ID":"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2","Type":"ContainerStarted","Data":"adfb2f6cdbb310803bb0535d200040b448e90383f49b9a70ac2a1ff1f9f78e95"}
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.534323 4813 generic.go:334] "Generic (PLEG): container finished" podID="a5f3efb2-a274-4b29-bc4b-30d924188614" containerID="325cf0ad115dc23b452ac9d6f45ea6ab48bfdb80909645b8e711ae3be5c2a82d" exitCode=0
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.534352 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.534331371 podStartE2EDuration="2.534331371s" podCreationTimestamp="2026-01-29 16:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:51.531162342 +0000 UTC m=+164.018365568" watchObservedRunningTime="2026-01-29 16:31:51.534331371 +0000 UTC m=+164.021534587"
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.534835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hb9j" event={"ID":"a5f3efb2-a274-4b29-bc4b-30d924188614","Type":"ContainerDied","Data":"325cf0ad115dc23b452ac9d6f45ea6ab48bfdb80909645b8e711ae3be5c2a82d"}
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.534878 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hb9j" event={"ID":"a5f3efb2-a274-4b29-bc4b-30d924188614","Type":"ContainerStarted","Data":"7de9752add78416bc752cd729c8f7d8c233eea056aa3b50e95f653d76ff6e3b0"}
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.540201 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd6ld" event={"ID":"1106c979-ac94-49f2-a9a9-cd044c3df80c","Type":"ContainerStarted","Data":"907ace76a173f3cc5cca38f2aa860eaf5fce7e7395fef5ef2892c0c1b8456ba0"}
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.540279 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd6ld" event={"ID":"1106c979-ac94-49f2-a9a9-cd044c3df80c","Type":"ContainerStarted","Data":"0f62a9a59bb776e5467aa88b65e25d42514d50f09cb83f45874905f0ae378087"}
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.540702 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.542680 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278"
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.570359 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nsttk"]
Jan 29 16:31:51 crc kubenswrapper[4813]: W0129 16:31:51.579228 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode35b844b_1645_458c_b117_f60fe6042abe.slice/crio-7490d7a5f01952d739ad8d974444efb61add3a07a4c818043ea145a43a8678e3 WatchSource:0}: Error finding container 7490d7a5f01952d739ad8d974444efb61add3a07a4c818043ea145a43a8678e3: Status 404 returned error can't find the container with id 7490d7a5f01952d739ad8d974444efb61add3a07a4c818043ea145a43a8678e3
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.675900 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:51 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:51 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:51 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:51 crc kubenswrapper[4813]: I0129 16:31:51.675980 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.677892 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.678058 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28276,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6hb9j_openshift-marketplace(a5f3efb2-a274-4b29-bc4b-30d924188614): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.679291 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.692695 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.692881 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp6vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zc9n4_openshift-marketplace(0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.694089 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.698161 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.698298 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wd6ld_openshift-marketplace(1106c979-ac94-49f2-a9a9-cd044c3df80c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:31:51 crc kubenswrapper[4813]: E0129 16:31:51.700369 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.029295 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.030441 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.033728 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.034004 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.041704 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.157765 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764d200f-49b3-4197-a22d-00b95b74f0b3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"764d200f-49b3-4197-a22d-00b95b74f0b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.157888 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764d200f-49b3-4197-a22d-00b95b74f0b3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"764d200f-49b3-4197-a22d-00b95b74f0b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.258812 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764d200f-49b3-4197-a22d-00b95b74f0b3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"764d200f-49b3-4197-a22d-00b95b74f0b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.258956 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764d200f-49b3-4197-a22d-00b95b74f0b3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"764d200f-49b3-4197-a22d-00b95b74f0b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.259191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764d200f-49b3-4197-a22d-00b95b74f0b3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"764d200f-49b3-4197-a22d-00b95b74f0b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.277010 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764d200f-49b3-4197-a22d-00b95b74f0b3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"764d200f-49b3-4197-a22d-00b95b74f0b3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.354582 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.545439 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.546791 4813 generic.go:334] "Generic (PLEG): container finished" podID="1106c979-ac94-49f2-a9a9-cd044c3df80c" containerID="907ace76a173f3cc5cca38f2aa860eaf5fce7e7395fef5ef2892c0c1b8456ba0" exitCode=0
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.546858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd6ld" event={"ID":"1106c979-ac94-49f2-a9a9-cd044c3df80c","Type":"ContainerDied","Data":"907ace76a173f3cc5cca38f2aa860eaf5fce7e7395fef5ef2892c0c1b8456ba0"}
Jan 29 16:31:52 crc kubenswrapper[4813]: E0129 16:31:52.548212 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.549180 4813 generic.go:334] "Generic (PLEG): container finished" podID="574d9fb3-6d3d-48d3-b46a-dd8869095af7" containerID="2a1a5dd3934c6da1162597b3ebcc6d43d635ea6aaba77211246acd8f776d94c4" exitCode=0
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.549229 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"574d9fb3-6d3d-48d3-b46a-dd8869095af7","Type":"ContainerDied","Data":"2a1a5dd3934c6da1162597b3ebcc6d43d635ea6aaba77211246acd8f776d94c4"}
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.551315 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nsttk" event={"ID":"e35b844b-1645-458c-b117-f60fe6042abe","Type":"ContainerStarted","Data":"464cf8621c9d1711d13395617c9bc296d91924038272caaeac66bb1f68c3ab89"}
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.551345 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nsttk" event={"ID":"e35b844b-1645-458c-b117-f60fe6042abe","Type":"ContainerStarted","Data":"dd2b71d9196ef7b2181aa197d44d32d4bb463398f0689f421c6f9978985bf5a1"}
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.551357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nsttk" event={"ID":"e35b844b-1645-458c-b117-f60fe6042abe","Type":"ContainerStarted","Data":"7490d7a5f01952d739ad8d974444efb61add3a07a4c818043ea145a43a8678e3"}
Jan 29 16:31:52 crc kubenswrapper[4813]: E0129 16:31:52.558280 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614"
Jan 29 16:31:52 crc kubenswrapper[4813]: E0129 16:31:52.558307 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2"
Jan 29 16:31:52 crc kubenswrapper[4813]: W0129 16:31:52.564139 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod764d200f_49b3_4197_a22d_00b95b74f0b3.slice/crio-ded46972f429e1d148ca64b5ae996f5be7b0e04efd123054dac8a6d4420afb25 WatchSource:0}: Error finding container ded46972f429e1d148ca64b5ae996f5be7b0e04efd123054dac8a6d4420afb25: Status 404 returned error can't find the container with id ded46972f429e1d148ca64b5ae996f5be7b0e04efd123054dac8a6d4420afb25
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.632992 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-nsttk" podStartSLOduration=144.63296549 podStartE2EDuration="2m24.63296549s" podCreationTimestamp="2026-01-29 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:31:52.629646637 +0000 UTC m=+165.116849863" watchObservedRunningTime="2026-01-29 16:31:52.63296549 +0000 UTC m=+165.120168706"
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.676320 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:52 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:52 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:52 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:52 crc kubenswrapper[4813]: I0129 16:31:52.676371 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.559173 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"764d200f-49b3-4197-a22d-00b95b74f0b3","Type":"ContainerStarted","Data":"ded46972f429e1d148ca64b5ae996f5be7b0e04efd123054dac8a6d4420afb25"}
Jan 29 16:31:53 crc kubenswrapper[4813]: E0129 16:31:53.562598 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c"
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.676360 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:53 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:53 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:53 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.676424 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.783916 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.876222 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kubelet-dir\") pod \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\" (UID: \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\") "
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.876370 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kube-api-access\") pod \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\" (UID: \"574d9fb3-6d3d-48d3-b46a-dd8869095af7\") "
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.876404 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "574d9fb3-6d3d-48d3-b46a-dd8869095af7" (UID: "574d9fb3-6d3d-48d3-b46a-dd8869095af7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.876629 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.895223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "574d9fb3-6d3d-48d3-b46a-dd8869095af7" (UID: "574d9fb3-6d3d-48d3-b46a-dd8869095af7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:31:53 crc kubenswrapper[4813]: I0129 16:31:53.978031 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/574d9fb3-6d3d-48d3-b46a-dd8869095af7-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.472586 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-px5lt"
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.478092 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-px5lt"
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.565923 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"574d9fb3-6d3d-48d3-b46a-dd8869095af7","Type":"ContainerDied","Data":"a655da25aa73f63dc29ba7c7f1f772e5189b6d5762442a5c9e92c725f629535e"}
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.566309 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a655da25aa73f63dc29ba7c7f1f772e5189b6d5762442a5c9e92c725f629535e"
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.566381 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.568614 4813 generic.go:334] "Generic (PLEG): container finished" podID="764d200f-49b3-4197-a22d-00b95b74f0b3" containerID="92bc7e138edc87fd73bbbeb79aeba73dd35a5ffe74b4012605c257998723e9b9" exitCode=0
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.569581 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"764d200f-49b3-4197-a22d-00b95b74f0b3","Type":"ContainerDied","Data":"92bc7e138edc87fd73bbbeb79aeba73dd35a5ffe74b4012605c257998723e9b9"}
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.679129 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:54 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:54 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:54 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:54 crc kubenswrapper[4813]: I0129 16:31:54.679188 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.104885 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-glpkt"
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.681359 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:55 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:55 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:55 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.681422 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.778425 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.905723 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764d200f-49b3-4197-a22d-00b95b74f0b3-kube-api-access\") pod \"764d200f-49b3-4197-a22d-00b95b74f0b3\" (UID: \"764d200f-49b3-4197-a22d-00b95b74f0b3\") "
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.905798 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764d200f-49b3-4197-a22d-00b95b74f0b3-kubelet-dir\") pod \"764d200f-49b3-4197-a22d-00b95b74f0b3\" (UID: \"764d200f-49b3-4197-a22d-00b95b74f0b3\") "
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.905960 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/764d200f-49b3-4197-a22d-00b95b74f0b3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "764d200f-49b3-4197-a22d-00b95b74f0b3" (UID: "764d200f-49b3-4197-a22d-00b95b74f0b3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.906287 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/764d200f-49b3-4197-a22d-00b95b74f0b3-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:55 crc kubenswrapper[4813]: I0129 16:31:55.910094 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764d200f-49b3-4197-a22d-00b95b74f0b3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "764d200f-49b3-4197-a22d-00b95b74f0b3" (UID: "764d200f-49b3-4197-a22d-00b95b74f0b3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:31:56 crc kubenswrapper[4813]: I0129 16:31:56.007860 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764d200f-49b3-4197-a22d-00b95b74f0b3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 16:31:56 crc kubenswrapper[4813]: I0129 16:31:56.579416 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"764d200f-49b3-4197-a22d-00b95b74f0b3","Type":"ContainerDied","Data":"ded46972f429e1d148ca64b5ae996f5be7b0e04efd123054dac8a6d4420afb25"}
Jan 29 16:31:56 crc kubenswrapper[4813]: I0129 16:31:56.579702 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ded46972f429e1d148ca64b5ae996f5be7b0e04efd123054dac8a6d4420afb25"
Jan 29 16:31:56 crc kubenswrapper[4813]: I0129 16:31:56.579462 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 16:31:56 crc kubenswrapper[4813]: I0129 16:31:56.675173 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:56 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:56 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:56 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:56 crc kubenswrapper[4813]: I0129 16:31:56.675263 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:57 crc kubenswrapper[4813]: I0129 16:31:57.675029 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:57 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:57 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:57 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:57 crc kubenswrapper[4813]: I0129 16:31:57.676458 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:58 crc kubenswrapper[4813]: I0129 16:31:58.675156 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:58 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:58 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:58 crc kubenswrapper[4813]: healthz check failed
Jan 29 16:31:58 crc kubenswrapper[4813]: I0129 16:31:58.675250 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 16:31:59 crc kubenswrapper[4813]: I0129 16:31:59.459850 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xr6mc"
Jan 29 16:31:59 crc kubenswrapper[4813]: I0129 16:31:59.676918 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 16:31:59 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld
Jan 29 16:31:59 crc kubenswrapper[4813]: [+]process-running ok
Jan 29 16:31:59 crc kubenswrapper[4813]: healthz check failed
probe failed with statuscode: 500" Jan 29 16:31:59 crc kubenswrapper[4813]: I0129 16:31:59.804812 4813 patch_prober.go:28] interesting pod/console-f9d7485db-f9sf8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 29 16:31:59 crc kubenswrapper[4813]: I0129 16:31:59.804863 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-f9sf8" podUID="0a897da4-3d6d-41f6-9fea-695b30bcd6f7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 29 16:32:00 crc kubenswrapper[4813]: I0129 16:32:00.240131 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:32:00 crc kubenswrapper[4813]: I0129 16:32:00.240442 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:32:00 crc kubenswrapper[4813]: I0129 16:32:00.605834 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-blzd8_d0eb230b-052d-4248-a827-a4b9a58281e3/cluster-samples-operator/0.log" Jan 29 16:32:00 crc kubenswrapper[4813]: I0129 16:32:00.605888 4813 generic.go:334] "Generic (PLEG): container finished" podID="d0eb230b-052d-4248-a827-a4b9a58281e3" containerID="ad16667f7e3ac36235848fed2b42ea4ffdacb34b8933ca8a06c43f912a0bbc3c" exitCode=2 Jan 29 16:32:00 crc kubenswrapper[4813]: I0129 16:32:00.605918 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" event={"ID":"d0eb230b-052d-4248-a827-a4b9a58281e3","Type":"ContainerDied","Data":"ad16667f7e3ac36235848fed2b42ea4ffdacb34b8933ca8a06c43f912a0bbc3c"} Jan 29 16:32:00 crc kubenswrapper[4813]: I0129 16:32:00.606371 4813 scope.go:117] "RemoveContainer" containerID="ad16667f7e3ac36235848fed2b42ea4ffdacb34b8933ca8a06c43f912a0bbc3c" Jan 29 16:32:00 crc kubenswrapper[4813]: I0129 16:32:00.675524 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:32:00 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Jan 29 16:32:00 crc kubenswrapper[4813]: [+]process-running ok Jan 29 16:32:00 crc kubenswrapper[4813]: healthz check failed Jan 29 16:32:00 crc kubenswrapper[4813]: I0129 16:32:00.675887 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:32:01 crc kubenswrapper[4813]: E0129 16:32:01.369094 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:32:01 crc kubenswrapper[4813]: E0129 16:32:01.369254 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr64c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rfg87_openshift-marketplace(61eb0d4e-a892-4ca0-aad1-5c2fe1039fec): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:01 crc kubenswrapper[4813]: E0129 16:32:01.370444 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" Jan 29 16:32:01 crc kubenswrapper[4813]: I0129 16:32:01.615021 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-blzd8_d0eb230b-052d-4248-a827-a4b9a58281e3/cluster-samples-operator/0.log" Jan 29 16:32:01 crc kubenswrapper[4813]: I0129 16:32:01.615104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-blzd8" event={"ID":"d0eb230b-052d-4248-a827-a4b9a58281e3","Type":"ContainerStarted","Data":"2b48b1d5d62a34a4fe5c6b44c62cf783c517a92d5d710ef4f3efaf4067e1da94"} Jan 29 16:32:01 crc kubenswrapper[4813]: I0129 16:32:01.676269 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
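[Annotation] The router-default pod fails its startup probe once per second from 16:31:56 onward (the healthz body shows backend-http and has-synced still failing) and first reports probe="startup" status="started" at 16:32:04, further down. While a startup probe has not yet succeeded, the kubelet suppresses liveness and readiness probing and only restarts the container after failureThreshold consecutive misses. A reduced sketch of that gating, with a placeholder threshold (my simplification, not kubelet source):

    package main

    import "fmt"

    // startupProbe is a reduced model: until the probe first succeeds,
    // only failureThreshold consecutive failures restart the container.
    type startupProbe struct {
    	failureThreshold int
    	failures         int
    	started          bool
    }

    // observe records one probe result and reports whether the kubelet
    // would restart the container.
    func (p *startupProbe) observe(ok bool) bool {
    	if p.started {
    		return false // a startup probe stops running after first success
    	}
    	if ok {
    		p.started = true
    		p.failures = 0
    		return false
    	}
    	p.failures++
    	return p.failures >= p.failureThreshold
    }

    func main() {
    	// Placeholder threshold; the router's real failureThreshold is not
    	// visible in this log.
    	p := &startupProbe{failureThreshold: 120}
    	for i := 0; i < 8; i++ {
    		fmt.Println("restart:", p.observe(false)) // 16:31:56..16:32:03 above
    	}
    	p.observe(true) // 16:32:04: probe="startup" status="started"
    	fmt.Println("started:", p.started)
    }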
start-of-body=[-]backend-http failed: reason withheld Jan 29 16:32:01 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Jan 29 16:32:01 crc kubenswrapper[4813]: [+]process-running ok Jan 29 16:32:01 crc kubenswrapper[4813]: healthz check failed Jan 29 16:32:01 crc kubenswrapper[4813]: I0129 16:32:01.676353 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:32:02 crc kubenswrapper[4813]: I0129 16:32:02.673907 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:32:02 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Jan 29 16:32:02 crc kubenswrapper[4813]: [+]process-running ok Jan 29 16:32:02 crc kubenswrapper[4813]: healthz check failed Jan 29 16:32:02 crc kubenswrapper[4813]: I0129 16:32:02.674751 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:32:03 crc kubenswrapper[4813]: E0129 16:32:03.362521 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:32:03 crc kubenswrapper[4813]: E0129 16:32:03.362697 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28276,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6hb9j_openshift-marketplace(a5f3efb2-a274-4b29-bc4b-30d924188614): 
ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:03 crc kubenswrapper[4813]: E0129 16:32:03.363890 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" Jan 29 16:32:03 crc kubenswrapper[4813]: I0129 16:32:03.674647 4813 patch_prober.go:28] interesting pod/router-default-5444994796-sn7j7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 16:32:03 crc kubenswrapper[4813]: [-]has-synced failed: reason withheld Jan 29 16:32:03 crc kubenswrapper[4813]: [+]process-running ok Jan 29 16:32:03 crc kubenswrapper[4813]: healthz check failed Jan 29 16:32:03 crc kubenswrapper[4813]: I0129 16:32:03.675006 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sn7j7" podUID="6595166b-564e-41bc-8d72-4306ee7da59d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 16:32:04 crc kubenswrapper[4813]: E0129 16:32:04.368917 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:32:04 crc kubenswrapper[4813]: E0129 16:32:04.369082 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod community-operators-klj7p_openshift-marketplace(64f64480-2953-4d8a-8374-7ee9bee7f712): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:04 crc kubenswrapper[4813]: E0129 16:32:04.370278 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:32:04 crc kubenswrapper[4813]: E0129 16:32:04.370453 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:32:04 crc kubenswrapper[4813]: E0129 16:32:04.370556 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52p5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jdwp2_openshift-marketplace(97b1fc85-bbb9-4308-905f-26f3258e1cc1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:04 crc kubenswrapper[4813]: E0129 16:32:04.371731 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-jdwp2" 
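[Annotation] Every catalog-source pod in this log fails identically: the CRI runtime cannot pull the operator index images from registry.redhat.io because its bearer-token request is answered with 403 (Forbidden), typically a missing or unentitled pull secret on this CRC host. The handshake being reported is the standard Docker-registry token flow; a Go sketch that exercises its first half (the 403 in the log happens at the second, credentialed step):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// Step 1: an unauthenticated ping of the registry API. A
    	// token-protected registry answers 401 and advertises its token
    	// endpoint in a WWW-Authenticate: Bearer realm=...,service=...
    	// challenge header.
    	resp, err := http.Get("https://registry.redhat.io/v2/")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	fmt.Println("ping:", resp.Status)
    	fmt.Println("challenge:", resp.Header.Get("Www-Authenticate"))
    	// Step 2 (not performed here): GET the advertised realm with the
    	// given service/scope plus credentials to exchange for a bearer
    	// token. In the records above, that second request is what returns
    	// 403 (Forbidden), which the runtime surfaces as ErrImagePull.
    }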
podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:32:04 crc kubenswrapper[4813]: I0129 16:32:04.674198 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:32:04 crc kubenswrapper[4813]: I0129 16:32:04.676552 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-sn7j7" Jan 29 16:32:05 crc kubenswrapper[4813]: E0129 16:32:05.365856 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:32:05 crc kubenswrapper[4813]: E0129 16:32:05.366023 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwhl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-flcbx_openshift-marketplace(d78854f3-3848-4a36-83dc-d10dd1d49d49): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:05 crc kubenswrapper[4813]: E0129 16:32:05.367283 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" Jan 29 16:32:06 crc kubenswrapper[4813]: E0129 16:32:06.362733 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 
403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:32:06 crc kubenswrapper[4813]: E0129 16:32:06.362965 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkdwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ppft9_openshift-marketplace(91ba5590-c7f3-4892-95fb-c39fbffe7278): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:06 crc kubenswrapper[4813]: E0129 16:32:06.364212 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" Jan 29 16:32:06 crc kubenswrapper[4813]: I0129 16:32:06.387779 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 16:32:07 crc kubenswrapper[4813]: E0129 16:32:07.361579 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:32:07 crc kubenswrapper[4813]: E0129 16:32:07.361725 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp6vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zc9n4_openshift-marketplace(0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:07 crc kubenswrapper[4813]: E0129 16:32:07.362905 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" Jan 29 16:32:08 crc kubenswrapper[4813]: E0129 16:32:08.376817 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:32:08 crc kubenswrapper[4813]: E0129 16:32:08.377347 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wd6ld_openshift-marketplace(1106c979-ac94-49f2-a9a9-cd044c3df80c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:08 crc kubenswrapper[4813]: E0129 16:32:08.378533 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" Jan 29 16:32:08 crc kubenswrapper[4813]: I0129 16:32:08.877520 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:32:09 crc kubenswrapper[4813]: I0129 16:32:09.834176 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:32:09 crc kubenswrapper[4813]: I0129 16:32:09.839339 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:32:15 crc kubenswrapper[4813]: E0129 16:32:15.242407 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:32:15 crc kubenswrapper[4813]: E0129 16:32:15.242590 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" Jan 29 16:32:16 crc kubenswrapper[4813]: E0129 16:32:16.241692 4813 pod_workers.go:1301] "Error syncing pod, skipping" 
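[Annotation] The ErrImagePull records above come from fresh pull attempts; once a pull has failed, subsequent pod syncs are short-circuited with ImagePullBackOff (the "Back-off pulling image" records from 16:32:15 on) until a doubling backoff expires, which is why new pull attempts reappear only around 16:32:26-16:32:33. A sketch of that schedule, assuming the commonly cited defaults of a 10s initial delay and a 5-minute cap (these are configuration-dependent, not read from this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    // imagePullBackoff sketches the doubling delay between pull retries.
    func imagePullBackoff(failures int) time.Duration {
    	const initial, maxDelay = 10 * time.Second, 5 * time.Minute
    	d := initial
    	for i := 1; i < failures; i++ {
    		d *= 2
    		if d > maxDelay {
    			return maxDelay
    		}
    	}
    	return d
    }

    func main() {
    	for n := 1; n <= 6; n++ {
    		fmt.Printf("failure %d -> wait %v\n", n, imagePullBackoff(n))
    	}
    }

The ~20-25s gap between the first failures (16:32:01-16:32:08) and the retries (16:32:26-16:32:33) is consistent with an early step of such a schedule.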
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" Jan 29 16:32:17 crc kubenswrapper[4813]: I0129 16:32:17.970996 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2s8mj"] Jan 29 16:32:18 crc kubenswrapper[4813]: E0129 16:32:18.244696 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:32:19 crc kubenswrapper[4813]: E0129 16:32:19.240585 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" Jan 29 16:32:19 crc kubenswrapper[4813]: I0129 16:32:19.703922 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7jhrs" Jan 29 16:32:20 crc kubenswrapper[4813]: E0129 16:32:20.242843 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" Jan 29 16:32:21 crc kubenswrapper[4813]: E0129 16:32:21.242979 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" Jan 29 16:32:21 crc kubenswrapper[4813]: E0129 16:32:21.243455 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" Jan 29 16:32:26 crc kubenswrapper[4813]: E0129 16:32:26.373839 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:32:26 crc kubenswrapper[4813]: E0129 16:32:26.374014 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52p5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jdwp2_openshift-marketplace(97b1fc85-bbb9-4308-905f-26f3258e1cc1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:26 crc kubenswrapper[4813]: E0129 16:32:26.375219 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.424633 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:32:26 crc kubenswrapper[4813]: E0129 16:32:26.424851 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764d200f-49b3-4197-a22d-00b95b74f0b3" containerName="pruner" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.424863 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="764d200f-49b3-4197-a22d-00b95b74f0b3" containerName="pruner" Jan 29 16:32:26 crc kubenswrapper[4813]: E0129 16:32:26.424875 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574d9fb3-6d3d-48d3-b46a-dd8869095af7" containerName="pruner" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.424881 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="574d9fb3-6d3d-48d3-b46a-dd8869095af7" containerName="pruner" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.424987 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="574d9fb3-6d3d-48d3-b46a-dd8869095af7" containerName="pruner" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.424998 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="764d200f-49b3-4197-a22d-00b95b74f0b3" containerName="pruner" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.425583 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.428861 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.428947 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.437403 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.508495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.508567 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.609981 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.610087 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.610170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.628142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.748063 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:26 crc kubenswrapper[4813]: I0129 16:32:26.938234 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 16:32:27 crc kubenswrapper[4813]: I0129 16:32:27.750285 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"51f62938-8d91-4f84-8d4f-74d8a2c52e95","Type":"ContainerStarted","Data":"7cc3b4aa476a437990e14595716a8fce3f013edd006396480ea7cc9a99f446ff"} Jan 29 16:32:27 crc kubenswrapper[4813]: I0129 16:32:27.750651 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"51f62938-8d91-4f84-8d4f-74d8a2c52e95","Type":"ContainerStarted","Data":"9854942782dda5426f559a00b37dc59b49ffcb30c8750f24a4d2a5ff6327838e"} Jan 29 16:32:27 crc kubenswrapper[4813]: I0129 16:32:27.765602 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.7655820800000002 podStartE2EDuration="1.76558208s" podCreationTimestamp="2026-01-29 16:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:27.762496959 +0000 UTC m=+200.249700175" watchObservedRunningTime="2026-01-29 16:32:27.76558208 +0000 UTC m=+200.252785286" Jan 29 16:32:28 crc kubenswrapper[4813]: E0129 16:32:28.376695 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:32:28 crc kubenswrapper[4813]: E0129 16:32:28.376840 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr64c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-rfg87_openshift-marketplace(61eb0d4e-a892-4ca0-aad1-5c2fe1039fec): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:28 crc kubenswrapper[4813]: E0129 16:32:28.378027 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" Jan 29 16:32:28 crc kubenswrapper[4813]: I0129 16:32:28.755905 4813 generic.go:334] "Generic (PLEG): container finished" podID="51f62938-8d91-4f84-8d4f-74d8a2c52e95" containerID="7cc3b4aa476a437990e14595716a8fce3f013edd006396480ea7cc9a99f446ff" exitCode=0 Jan 29 16:32:28 crc kubenswrapper[4813]: I0129 16:32:28.755949 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"51f62938-8d91-4f84-8d4f-74d8a2c52e95","Type":"ContainerDied","Data":"7cc3b4aa476a437990e14595716a8fce3f013edd006396480ea7cc9a99f446ff"} Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.088086 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.155289 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kubelet-dir\") pod \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\" (UID: \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\") " Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.155761 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kube-api-access\") pod \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\" (UID: \"51f62938-8d91-4f84-8d4f-74d8a2c52e95\") " Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.155396 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "51f62938-8d91-4f84-8d4f-74d8a2c52e95" (UID: "51f62938-8d91-4f84-8d4f-74d8a2c52e95"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.163337 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "51f62938-8d91-4f84-8d4f-74d8a2c52e95" (UID: "51f62938-8d91-4f84-8d4f-74d8a2c52e95"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.240171 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.240240 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.250274 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.250675 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.250769 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b" gracePeriod=600 Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.257584 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.257611 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51f62938-8d91-4f84-8d4f-74d8a2c52e95-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:30 crc kubenswrapper[4813]: E0129 16:32:30.386177 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:32:30 crc kubenswrapper[4813]: E0129 16:32:30.386354 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28276,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6hb9j_openshift-marketplace(a5f3efb2-a274-4b29-bc4b-30d924188614): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:30 crc kubenswrapper[4813]: E0129 16:32:30.387550 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.767572 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b" exitCode=0 Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.767718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b"} Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.767886 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"ba95b516b2c9c5d21eb0ef2cc39ecb780699f7b9dbcd2fa1b5115bbf3254bee0"} Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.769599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"51f62938-8d91-4f84-8d4f-74d8a2c52e95","Type":"ContainerDied","Data":"9854942782dda5426f559a00b37dc59b49ffcb30c8750f24a4d2a5ff6327838e"} Jan 29 16:32:30 crc kubenswrapper[4813]: I0129 16:32:30.769623 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9854942782dda5426f559a00b37dc59b49ffcb30c8750f24a4d2a5ff6327838e" Jan 29 16:32:30 
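[Annotation] The liveness failure at 16:32:30 triggers the standard restart path: the kubelet marks the container unhealthy, records "Killing container with a grace period" (gracePeriod=600), and the next PLEG events show ContainerDied followed immediately by ContainerStarted for the replacement. The stop itself is the usual two-phase kill; a POSIX-only Go sketch with a shortened stand-in grace period:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    func main() {
    	// Stand-in workload; sleep exits promptly on SIGTERM.
    	cmd := exec.Command("sleep", "600")
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()

    	// Phase 1: polite stop, as in "Killing container with a grace period".
    	_ = cmd.Process.Signal(syscall.SIGTERM)
    	select {
    	case err := <-done:
    		fmt.Println("exited within grace period:", err)
    	case <-time.After(5 * time.Second): // stand-in for gracePeriod=600
    		// Phase 2: escalate to SIGKILL once the grace period lapses.
    		_ = cmd.Process.Kill()
    		fmt.Println("force-killed:", <-done)
    	}
    }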
crc kubenswrapper[4813]: I0129 16:32:30.769662 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 16:32:31 crc kubenswrapper[4813]: E0129 16:32:31.365863 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:32:31 crc kubenswrapper[4813]: E0129 16:32:31.366051 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkdwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ppft9_openshift-marketplace(91ba5590-c7f3-4892-95fb-c39fbffe7278): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:31 crc kubenswrapper[4813]: E0129 16:32:31.367247 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.828240 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 16:32:32 crc kubenswrapper[4813]: E0129 16:32:32.829009 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f62938-8d91-4f84-8d4f-74d8a2c52e95" containerName="pruner" Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.829025 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f62938-8d91-4f84-8d4f-74d8a2c52e95" containerName="pruner" Jan 29 
Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.829612 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.831383 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.831491 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.833788 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.990098 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-var-lock\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.990370 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1712182e-1c1a-4d42-87bb-29d7df20db88-kube-api-access\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:32 crc kubenswrapper[4813]: I0129 16:32:32.990479 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.091818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1712182e-1c1a-4d42-87bb-29d7df20db88-kube-api-access\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.091921 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.091958 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-var-lock\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.092060 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-var-lock\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.092189 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.109573 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1712182e-1c1a-4d42-87bb-29d7df20db88-kube-api-access\") pod \"installer-9-crc\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.152432 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 16:32:33 crc kubenswrapper[4813]: E0129 16:32:33.370100 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:32:33 crc kubenswrapper[4813]: E0129 16:32:33.370740 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wd6ld_openshift-marketplace(1106c979-ac94-49f2-a9a9-cd044c3df80c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:32:33 crc kubenswrapper[4813]: E0129 16:32:33.371123 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
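Every one of these pulls fails at the same step, "Requesting bearer token": the registry's token service answers 403 before any image data is fetched, so the index images exist but the credentials were rejected (on registry.redhat.io this usually means a missing or expired pull secret rather than a bad image reference). A minimal sketch of the first half of that Docker Registry v2 handshake, to reproduce the failing step outside the kubelet; the realm/service details vary by registry and are read from the header, not hardcoded:

```go
// Sketch of the handshake behind "Requesting bearer token". A v2 registry
// answers GET /v2/ with 401 and a WWW-Authenticate header naming its token
// service; clients then request a bearer token from that realm, presenting
// the pull-secret credentials. The 403s in this log come from that second
// request, i.e. the token service rejected the credentials.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("https://registry.redhat.io/v2/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect 401 Unauthorized for anonymous access
	// The realm/service in this header is the token endpoint the kubelet's
	// image service was querying when it got the 403.
	fmt.Println(resp.Header.Get("Www-Authenticate"))
}
```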
Jan 29 16:32:33 crc kubenswrapper[4813]: E0129 16:32:33.371443 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-klj7p_openshift-marketplace(64f64480-2953-4d8a-8374-7ee9bee7f712): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:32:33 crc kubenswrapper[4813]: E0129 16:32:33.372167 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c"
Jan 29 16:32:33 crc kubenswrapper[4813]: E0129 16:32:33.372904 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712"
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.582542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 29 16:32:33 crc kubenswrapper[4813]: I0129 16:32:33.787170 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1712182e-1c1a-4d42-87bb-29d7df20db88","Type":"ContainerStarted","Data":"1bfceaec5146addaa11d6c0090a75c1e060e9f6b3d65bd4a4b8d7e8bfe539b8d"}
Jan 29 16:32:34 crc kubenswrapper[4813]: E0129 16:32:34.361943 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:32:34 crc kubenswrapper[4813]: E0129 16:32:34.362146 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwhl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-flcbx_openshift-marketplace(d78854f3-3848-4a36-83dc-d10dd1d49d49): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:32:34 crc kubenswrapper[4813]: E0129 16:32:34.363283 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49"
Jan 29 16:32:34 crc kubenswrapper[4813]: I0129 16:32:34.799022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1712182e-1c1a-4d42-87bb-29d7df20db88","Type":"ContainerStarted","Data":"31fd60c7d4afced969510743c83a8ad6b6c68735835c3ba6a858f3db4ef08855"}
Jan 29 16:32:34 crc kubenswrapper[4813]: I0129 16:32:34.823753 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.823736602 podStartE2EDuration="2.823736602s" podCreationTimestamp="2026-01-29 16:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:34.820011731 +0000 UTC m=+207.307214947" watchObservedRunningTime="2026-01-29 16:32:34.823736602 +0000 UTC m=+207.310939818"
Jan 29 16:32:37 crc kubenswrapper[4813]: E0129 16:32:37.466034 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:32:37 crc kubenswrapper[4813]: E0129 16:32:37.466475 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp6vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zc9n4_openshift-marketplace(0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:32:37 crc kubenswrapper[4813]: E0129 16:32:37.467850 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2"
Jan 29 16:32:41 crc kubenswrapper[4813]: E0129 16:32:41.243193 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614"
Jan 29 16:32:41 crc kubenswrapper[4813]: E0129 16:32:41.243226 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec"
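Note the shift in the errors here: the entries up to 16:32:37 are real pull attempts failing with ErrImagePull, while from 16:32:41 onward the pod workers short-circuit with ImagePullBackOff and never contact the registry. A sketch of that doubling-backoff cadence; the 10s initial delay and 5m cap are the usual kubelet defaults, assumed here rather than taken from this log:

```go
// Sketch of the image-pull backoff behind the ImagePullBackOff entries:
// between real pull attempts the kubelet just reports "Back-off pulling
// image", and each failure doubles the wait up to a cap. Defaults assumed.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed (403); next real pull in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // stays at the cap until a pull succeeds
		}
	}
}
```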
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" Jan 29 16:32:41 crc kubenswrapper[4813]: E0129 16:32:41.243466 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.012280 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" podUID="9c267b88-a775-4677-839f-a30c62f8be31" containerName="oauth-openshift" containerID="cri-o://e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03" gracePeriod=15 Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.380589 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.415007 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-vdnpj"] Jan 29 16:32:43 crc kubenswrapper[4813]: E0129 16:32:43.415363 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c267b88-a775-4677-839f-a30c62f8be31" containerName="oauth-openshift" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.415383 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c267b88-a775-4677-839f-a30c62f8be31" containerName="oauth-openshift" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.415555 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c267b88-a775-4677-839f-a30c62f8be31" containerName="oauth-openshift" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.416178 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.427065 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-vdnpj"] Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-service-ca\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526531 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whqt8\" (UniqueName: \"kubernetes.io/projected/9c267b88-a775-4677-839f-a30c62f8be31-kube-api-access-whqt8\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526561 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-idp-0-file-data\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526585 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-serving-cert\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526613 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-trusted-ca-bundle\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526662 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-router-certs\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526691 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-provider-selection\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526712 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-ocp-branding-template\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-audit-policies\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526788 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-login\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526815 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c267b88-a775-4677-839f-a30c62f8be31-audit-dir\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526844 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-cliconfig\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526892 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-error\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.526920 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-session\") pod \"9c267b88-a775-4677-839f-a30c62f8be31\" (UID: \"9c267b88-a775-4677-839f-a30c62f8be31\") " Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527079 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527138 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527171 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527197 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527246 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dnd\" (UniqueName: \"kubernetes.io/projected/029c2700-9dbc-4251-a62e-0f71bb0253e7-kube-api-access-d2dnd\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527277 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-audit-policies\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527349 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527398 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/029c2700-9dbc-4251-a62e-0f71bb0253e7-audit-dir\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527659 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " 
pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527693 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527720 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527746 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.527963 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c267b88-a775-4677-839f-a30c62f8be31-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.528353 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.528374 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.528391 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.528850 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.534185 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.534568 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.534647 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c267b88-a775-4677-839f-a30c62f8be31-kube-api-access-whqt8" (OuterVolumeSpecName: "kube-api-access-whqt8") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "kube-api-access-whqt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.534921 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.544457 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.544653 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.545018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.545281 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.545461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9c267b88-a775-4677-839f-a30c62f8be31" (UID: "9c267b88-a775-4677-839f-a30c62f8be31"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629697 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/029c2700-9dbc-4251-a62e-0f71bb0253e7-audit-dir\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629838 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629869 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-error\") pod 
\"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629924 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629941 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629960 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.629980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630000 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630018 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dnd\" (UniqueName: \"kubernetes.io/projected/029c2700-9dbc-4251-a62e-0f71bb0253e7-kube-api-access-d2dnd\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-audit-policies\") pod \"oauth-openshift-76fc545986-vdnpj\" 
(UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630069 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630176 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630197 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whqt8\" (UniqueName: \"kubernetes.io/projected/9c267b88-a775-4677-839f-a30c62f8be31-kube-api-access-whqt8\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630209 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630222 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630234 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630246 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630258 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630269 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630281 4813 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630296 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-login\") on 
node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630307 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c267b88-a775-4677-839f-a30c62f8be31-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630318 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630330 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.630343 4813 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c267b88-a775-4677-839f-a30c62f8be31-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.631445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-audit-policies\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.631516 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/029c2700-9dbc-4251-a62e-0f71bb0253e7-audit-dir\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.631614 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.631932 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.633265 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.634950 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.635490 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.635487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.635778 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.635863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.636179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.636235 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.636379 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/029c2700-9dbc-4251-a62e-0f71bb0253e7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.648071 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2dnd\" (UniqueName: 
\"kubernetes.io/projected/029c2700-9dbc-4251-a62e-0f71bb0253e7-kube-api-access-d2dnd\") pod \"oauth-openshift-76fc545986-vdnpj\" (UID: \"029c2700-9dbc-4251-a62e-0f71bb0253e7\") " pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.737560 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.849000 4813 generic.go:334] "Generic (PLEG): container finished" podID="9c267b88-a775-4677-839f-a30c62f8be31" containerID="e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03" exitCode=0 Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.849057 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.849056 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" event={"ID":"9c267b88-a775-4677-839f-a30c62f8be31","Type":"ContainerDied","Data":"e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03"} Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.849569 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2s8mj" event={"ID":"9c267b88-a775-4677-839f-a30c62f8be31","Type":"ContainerDied","Data":"6881aea29da7abd24447037c71cae5ddf840cf89eef75a1c84dad0eed07c6172"} Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.849590 4813 scope.go:117] "RemoveContainer" containerID="e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.872069 4813 scope.go:117] "RemoveContainer" containerID="e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03" Jan 29 16:32:43 crc kubenswrapper[4813]: E0129 16:32:43.872773 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03\": container with ID starting with e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03 not found: ID does not exist" containerID="e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.872963 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03"} err="failed to get container status \"e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03\": rpc error: code = NotFound desc = could not find container \"e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03\": container with ID starting with e69ba139572d68e1a22337be91d0e44614257b61eab0654ed17b3085cf0cbc03 not found: ID does not exist" Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.886064 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2s8mj"] Jan 29 16:32:43 crc kubenswrapper[4813]: I0129 16:32:43.890582 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2s8mj"] Jan 29 16:32:44 crc kubenswrapper[4813]: I0129 16:32:44.128708 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-vdnpj"] Jan 29 16:32:44 crc 
kubenswrapper[4813]: E0129 16:32:44.241005 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" Jan 29 16:32:44 crc kubenswrapper[4813]: I0129 16:32:44.246924 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c267b88-a775-4677-839f-a30c62f8be31" path="/var/lib/kubelet/pods/9c267b88-a775-4677-839f-a30c62f8be31/volumes" Jan 29 16:32:44 crc kubenswrapper[4813]: I0129 16:32:44.859026 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" event={"ID":"029c2700-9dbc-4251-a62e-0f71bb0253e7","Type":"ContainerStarted","Data":"67d6018c5002720a82b8c81c012f2933952f18c7c17409b0aefa9d99d05dc1d5"} Jan 29 16:32:44 crc kubenswrapper[4813]: I0129 16:32:44.859364 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" event={"ID":"029c2700-9dbc-4251-a62e-0f71bb0253e7","Type":"ContainerStarted","Data":"51b0ff802dd934dae8caef10d663ca33a5c76c7ff179c0c790e4704b540140ae"} Jan 29 16:32:44 crc kubenswrapper[4813]: I0129 16:32:44.860329 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:44 crc kubenswrapper[4813]: I0129 16:32:44.868214 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" Jan 29 16:32:44 crc kubenswrapper[4813]: I0129 16:32:44.884296 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76fc545986-vdnpj" podStartSLOduration=27.884271179 podStartE2EDuration="27.884271179s" podCreationTimestamp="2026-01-29 16:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:32:44.878360384 +0000 UTC m=+217.365563610" watchObservedRunningTime="2026-01-29 16:32:44.884271179 +0000 UTC m=+217.371474395" Jan 29 16:32:46 crc kubenswrapper[4813]: E0129 16:32:46.244356 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:32:46 crc kubenswrapper[4813]: E0129 16:32:46.244461 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" Jan 29 16:32:48 crc kubenswrapper[4813]: E0129 16:32:48.246417 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" Jan 29 16:32:49 crc kubenswrapper[4813]: E0129 16:32:49.241169 4813 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" Jan 29 16:32:52 crc kubenswrapper[4813]: E0129 16:32:52.242311 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:32:55 crc kubenswrapper[4813]: E0129 16:32:55.240848 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" Jan 29 16:32:56 crc kubenswrapper[4813]: E0129 16:32:56.241191 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" Jan 29 16:32:57 crc kubenswrapper[4813]: E0129 16:32:57.242070 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" Jan 29 16:32:58 crc kubenswrapper[4813]: E0129 16:32:58.244014 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" Jan 29 16:32:59 crc kubenswrapper[4813]: E0129 16:32:59.240465 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" Jan 29 16:32:59 crc kubenswrapper[4813]: E0129 16:32:59.241550 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:33:02 crc kubenswrapper[4813]: E0129 16:33:02.241287 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" Jan 29 16:33:06 crc kubenswrapper[4813]: E0129 16:33:06.241636 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:33:08 crc kubenswrapper[4813]: E0129 16:33:08.246205 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" Jan 29 16:33:10 crc kubenswrapper[4813]: E0129 16:33:10.241309 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" Jan 29 16:33:10 crc kubenswrapper[4813]: E0129 16:33:10.365576 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:33:10 crc kubenswrapper[4813]: E0129 16:33:10.365758 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr64c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rfg87_openshift-marketplace(61eb0d4e-a892-4ca0-aad1-5c2fe1039fec): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:10 crc kubenswrapper[4813]: E0129 16:33:10.367253 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.496473 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.497150 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725" gracePeriod=15 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.497243 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca" gracePeriod=15 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.497261 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a" gracePeriod=15 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.497412 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a" gracePeriod=15 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.497684 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.497811 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03" gracePeriod=15 Jan 29 16:33:11 crc kubenswrapper[4813]: E0129 16:33:11.498551 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498572 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:33:11 crc kubenswrapper[4813]: E0129 16:33:11.498590 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498599 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 16:33:11 crc kubenswrapper[4813]: E0129 16:33:11.498607 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 
16:33:11.498614 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:33:11 crc kubenswrapper[4813]: E0129 16:33:11.498623 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498629 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:33:11 crc kubenswrapper[4813]: E0129 16:33:11.498640 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498646 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: E0129 16:33:11.498654 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498659 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:33:11 crc kubenswrapper[4813]: E0129 16:33:11.498667 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498672 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498783 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498798 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498811 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498820 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498832 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498843 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498850 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 16:33:11 crc kubenswrapper[4813]: E0129 16:33:11.498983 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.498996 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.500162 4813 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.500584 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.507782 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.678382 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.678503 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.678585 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.678629 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.678658 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.678689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.678728 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.678764 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.779887 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.779951 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.779974 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780001 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780033 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780060 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780106 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780193 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780275 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780325 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780329 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780351 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780375 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780360 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780385 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.780335 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.990429 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 16:33:11 crc kubenswrapper[4813]: 
I0129 16:33:11.991839 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.992564 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725" exitCode=0 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.992597 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a" exitCode=0 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.992606 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca" exitCode=0 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.992615 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a" exitCode=2 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.992680 4813 scope.go:117] "RemoveContainer" containerID="df8a92ae257fe5d442b248c0bf3992a923cb26130f2e67f30479536607895aa9" Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.994207 4813 generic.go:334] "Generic (PLEG): container finished" podID="1712182e-1c1a-4d42-87bb-29d7df20db88" containerID="31fd60c7d4afced969510743c83a8ad6b6c68735835c3ba6a858f3db4ef08855" exitCode=0 Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.994234 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1712182e-1c1a-4d42-87bb-29d7df20db88","Type":"ContainerDied","Data":"31fd60c7d4afced969510743c83a8ad6b6c68735835c3ba6a858f3db4ef08855"} Jan 29 16:33:11 crc kubenswrapper[4813]: I0129 16:33:11.995153 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Jan 29 16:33:12 crc kubenswrapper[4813]: E0129 16:33:12.241557 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:33:12 crc kubenswrapper[4813]: E0129 16:33:12.242238 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-klj7p.188f40b173112407\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-klj7p.188f40b173112407 openshift-marketplace 29508 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-klj7p,UID:64f64480-2953-4d8a-8374-7ee9bee7f712,APIVersion:v1,ResourceVersion:28456,FieldPath:spec.initContainers{extract-content},},Reason:BackOff,Message:Back-off pulling image 
\"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:31:49 +0000 UTC,LastTimestamp:2026-01-29 16:33:12.241508393 +0000 UTC m=+244.728711609,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 16:33:12 crc kubenswrapper[4813]: E0129 16:33:12.379910 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:33:12 crc kubenswrapper[4813]: E0129 16:33:12.380139 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkdwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ppft9_openshift-marketplace(91ba5590-c7f3-4892-95fb-c39fbffe7278): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:12 crc kubenswrapper[4813]: E0129 16:33:12.381372 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.003621 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.241253 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.242946 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.409925 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-kubelet-dir\") pod \"1712182e-1c1a-4d42-87bb-29d7df20db88\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.410061 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1712182e-1c1a-4d42-87bb-29d7df20db88" (UID: "1712182e-1c1a-4d42-87bb-29d7df20db88"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.410145 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-var-lock\") pod \"1712182e-1c1a-4d42-87bb-29d7df20db88\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.410172 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-var-lock" (OuterVolumeSpecName: "var-lock") pod "1712182e-1c1a-4d42-87bb-29d7df20db88" (UID: "1712182e-1c1a-4d42-87bb-29d7df20db88"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.410233 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1712182e-1c1a-4d42-87bb-29d7df20db88-kube-api-access\") pod \"1712182e-1c1a-4d42-87bb-29d7df20db88\" (UID: \"1712182e-1c1a-4d42-87bb-29d7df20db88\") " Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.410753 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.410780 4813 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1712182e-1c1a-4d42-87bb-29d7df20db88-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.416554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1712182e-1c1a-4d42-87bb-29d7df20db88-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1712182e-1c1a-4d42-87bb-29d7df20db88" (UID: "1712182e-1c1a-4d42-87bb-29d7df20db88"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.511960 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1712182e-1c1a-4d42-87bb-29d7df20db88-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.866632 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.867794 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.868413 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Jan 29 16:33:13 crc kubenswrapper[4813]: I0129 16:33:13.868869 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.013238 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.014716 4813 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03" exitCode=0 Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.014803 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.014828 4813 scope.go:117] "RemoveContainer" containerID="8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.017941 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1712182e-1c1a-4d42-87bb-29d7df20db88","Type":"ContainerDied","Data":"1bfceaec5146addaa11d6c0090a75c1e060e9f6b3d65bd4a4b8d7e8bfe539b8d"} Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.018005 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfceaec5146addaa11d6c0090a75c1e060e9f6b3d65bd4a4b8d7e8bfe539b8d" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.018017 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.017948 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.018242 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.018305 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.018339 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.018372 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.018026 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.019052 4813 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.019103 4813 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.019183 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.031029 4813 scope.go:117] "RemoveContainer" containerID="28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.041822 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.042155 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.050023 4813 scope.go:117] "RemoveContainer" containerID="ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.062838 4813 scope.go:117] "RemoveContainer" containerID="d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.076271 4813 scope.go:117] "RemoveContainer" containerID="073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.100209 4813 scope.go:117] "RemoveContainer" containerID="89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.123074 4813 scope.go:117] "RemoveContainer" containerID="8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725" Jan 29 16:33:14 crc kubenswrapper[4813]: E0129 16:33:14.123581 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\": container with ID starting with 8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725 not found: ID does not exist" containerID="8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.123613 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725"} err="failed to get container status \"8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\": rpc error: code = NotFound desc = could not find container 
\"8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725\": container with ID starting with 8c7857c188099d3a5d941694f3b98b506c29a6d797fcd54c3375d6618eb80725 not found: ID does not exist" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.123636 4813 scope.go:117] "RemoveContainer" containerID="28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a" Jan 29 16:33:14 crc kubenswrapper[4813]: E0129 16:33:14.124173 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\": container with ID starting with 28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a not found: ID does not exist" containerID="28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.124194 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a"} err="failed to get container status \"28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\": rpc error: code = NotFound desc = could not find container \"28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a\": container with ID starting with 28b371db2c9e77a0316ca2ccc52eedb1f7342b12671632ee18535b5fb89b681a not found: ID does not exist" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.124210 4813 scope.go:117] "RemoveContainer" containerID="ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca" Jan 29 16:33:14 crc kubenswrapper[4813]: E0129 16:33:14.124702 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\": container with ID starting with ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca not found: ID does not exist" containerID="ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.124765 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca"} err="failed to get container status \"ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\": rpc error: code = NotFound desc = could not find container \"ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca\": container with ID starting with ad84981fdcd2287024e17e3ebb4caa21aa0a5fc08d8eff6b9345ced91cb5f5ca not found: ID does not exist" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.124820 4813 scope.go:117] "RemoveContainer" containerID="d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a" Jan 29 16:33:14 crc kubenswrapper[4813]: E0129 16:33:14.125274 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\": container with ID starting with d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a not found: ID does not exist" containerID="d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.125314 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a"} 
err="failed to get container status \"d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\": rpc error: code = NotFound desc = could not find container \"d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a\": container with ID starting with d4b9ce77b427606014a21ba0b5d1bda47ef9d075c3f871bfd82450fa7568a18a not found: ID does not exist" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.125340 4813 scope.go:117] "RemoveContainer" containerID="073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03" Jan 29 16:33:14 crc kubenswrapper[4813]: E0129 16:33:14.125821 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\": container with ID starting with 073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03 not found: ID does not exist" containerID="073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.125860 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03"} err="failed to get container status \"073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\": rpc error: code = NotFound desc = could not find container \"073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03\": container with ID starting with 073f8545e99fd26d9251e722af4405d1973f4954158b9afa2a404a2edfe7fe03 not found: ID does not exist" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.125881 4813 scope.go:117] "RemoveContainer" containerID="89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62" Jan 29 16:33:14 crc kubenswrapper[4813]: E0129 16:33:14.126257 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\": container with ID starting with 89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62 not found: ID does not exist" containerID="89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.126306 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62"} err="failed to get container status \"89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\": rpc error: code = NotFound desc = could not find container \"89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62\": container with ID starting with 89adf89a556ef2f33bf96e39144ec668dfad54152bf0a8d327f36844c84cbc62 not found: ID does not exist" Jan 29 16:33:14 crc kubenswrapper[4813]: E0129 16:33:14.241464 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.245355 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.318227 4813 
Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.318227 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:14 crc kubenswrapper[4813]: I0129 16:33:14.318997 4813 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:16 crc kubenswrapper[4813]: E0129 16:33:16.127726 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-klj7p.188f40b173112407\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-klj7p.188f40b173112407 openshift-marketplace 29508 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-klj7p,UID:64f64480-2953-4d8a-8374-7ee9bee7f712,APIVersion:v1,ResourceVersion:28456,FieldPath:spec.initContainers{extract-content},},Reason:BackOff,Message:Back-off pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:31:49 +0000 UTC,LastTimestamp:2026-01-29 16:33:12.241508393 +0000 UTC m=+244.728711609,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 16:33:16 crc kubenswrapper[4813]: E0129 16:33:16.359153 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:33:16 crc kubenswrapper[4813]: E0129 16:33:16.359567 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwhl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-flcbx_openshift-marketplace(d78854f3-3848-4a36-83dc-d10dd1d49d49): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:33:16 crc kubenswrapper[4813]: E0129 16:33:16.360935 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49"
Jan 29 16:33:16 crc kubenswrapper[4813]: E0129 16:33:16.537263 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
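
Every catalog pull above fails the same way: the registry answers the bearer-token request with 403 (Forbidden), the kubelet reports ErrImagePull, and the pod worker parks the pod in ImagePullBackOff with a growing delay between attempts. A stdlib-Go sketch of a per-image doubling backoff of the kind these messages imply (the 10s base and 5m cap are commonly cited kubelet defaults, not values confirmed by this log):

// Sketch of the "ErrImagePull -> ImagePullBackOff" cycle; illustrative only.
package main

import (
	"fmt"
	"time"
)

type pullBackoff struct {
	next      map[string]time.Duration // per-image delay before the next attempt
	base, max time.Duration
}

func newPullBackoff() *pullBackoff {
	return &pullBackoff{next: map[string]time.Duration{}, base: 10 * time.Second, max: 5 * time.Minute}
}

// failed records a pull failure and returns how long to wait before retrying.
func (b *pullBackoff) failed(image string) time.Duration {
	d, ok := b.next[image]
	if !ok {
		d = b.base
	} else {
		d *= 2
		if d > b.max {
			d = b.max
		}
	}
	b.next[image] = d
	return d
}

func main() {
	b := newPullBackoff()
	img := "registry.redhat.io/redhat/certified-operator-index:v4.18"
	for i := 0; i < 6; i++ {
		fmt.Printf("attempt %d failed (403), back off %v\n", i+1, b.failed(img))
	}
}
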
Jan 29 16:33:16 crc kubenswrapper[4813]: I0129 16:33:16.537771 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 16:33:16 crc kubenswrapper[4813]: W0129 16:33:16.563504 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ecdf786282af5d503436f7d3022c5ed44965aab4f605687a166f6714d6d2f666 WatchSource:0}: Error finding container ecdf786282af5d503436f7d3022c5ed44965aab4f605687a166f6714d6d2f666: Status 404 returned error can't find the container with id ecdf786282af5d503436f7d3022c5ed44965aab4f605687a166f6714d6d2f666
Jan 29 16:33:17 crc kubenswrapper[4813]: I0129 16:33:17.039596 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e"}
Jan 29 16:33:17 crc kubenswrapper[4813]: I0129 16:33:17.040057 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ecdf786282af5d503436f7d3022c5ed44965aab4f605687a166f6714d6d2f666"}
Jan 29 16:33:17 crc kubenswrapper[4813]: E0129 16:33:17.041144 4813 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 16:33:17 crc kubenswrapper[4813]: I0129 16:33:17.041248 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:17 crc kubenswrapper[4813]: E0129 16:33:17.362952 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 16:33:17 crc kubenswrapper[4813]: E0129 16:33:17.363290 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52p5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jdwp2_openshift-marketplace(97b1fc85-bbb9-4308-905f-26f3258e1cc1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:33:17 crc kubenswrapper[4813]: E0129 16:33:17.364689 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1"
Jan 29 16:33:18 crc kubenswrapper[4813]: I0129 16:33:18.241454 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:18 crc kubenswrapper[4813]: E0129 16:33:18.292708 4813 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" volumeName="registry-storage"
Jan 29 16:33:18 crc kubenswrapper[4813]: E0129 16:33:18.484368 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:18 crc kubenswrapper[4813]: E0129 16:33:18.484964 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:18 crc kubenswrapper[4813]: E0129 16:33:18.485424 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:18 crc kubenswrapper[4813]: E0129 16:33:18.485725 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:18 crc kubenswrapper[4813]: E0129 16:33:18.486003 4813 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:18 crc kubenswrapper[4813]: I0129 16:33:18.486035 4813 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 29 16:33:18 crc kubenswrapper[4813]: E0129 16:33:18.486263 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="200ms"
Jan 29 16:33:18 crc kubenswrapper[4813]: E0129 16:33:18.686988 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="400ms"
Jan 29 16:33:19 crc kubenswrapper[4813]: E0129 16:33:19.088641 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="800ms"
Jan 29 16:33:19 crc kubenswrapper[4813]: E0129 16:33:19.889652 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="1.6s"
Jan 29 16:33:21 crc kubenswrapper[4813]: E0129 16:33:21.367959 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:33:21 crc kubenswrapper[4813]: E0129 16:33:21.368546 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28276,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6hb9j_openshift-marketplace(a5f3efb2-a274-4b29-bc4b-30d924188614): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:33:21 crc kubenswrapper[4813]: E0129 16:33:21.370225 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614"
Jan 29 16:33:21 crc kubenswrapper[4813]: E0129 16:33:21.491361 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="3.2s"
Jan 29 16:33:23 crc kubenswrapper[4813]: I0129 16:33:23.239432 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:23 crc kubenswrapper[4813]: I0129 16:33:23.239737 4813 status_manager.go:851] "Failed to get status for pod" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" pod="openshift-marketplace/certified-operators-rfg87" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rfg87\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:23 crc kubenswrapper[4813]: E0129 16:33:23.240289 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec"
Jan 29 16:33:23 crc kubenswrapper[4813]: E0129 16:33:23.374616 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 29 16:33:23 crc kubenswrapper[4813]: E0129 16:33:23.374773 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wd6ld_openshift-marketplace(1106c979-ac94-49f2-a9a9-cd044c3df80c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:33:23 crc kubenswrapper[4813]: E0129 16:33:23.375946 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c"
Jan 29 16:33:24 crc kubenswrapper[4813]: E0129 16:33:24.692650 4813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.47:6443: connect: connection refused" interval="6.4s"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.238926 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.240033 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.240670 4813 status_manager.go:851] "Failed to get status for pod" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" pod="openshift-marketplace/certified-operators-rfg87" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rfg87\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.241363 4813 status_manager.go:851] "Failed to get status for pod" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" pod="openshift-marketplace/redhat-marketplace-ppft9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppft9\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.241856 4813 status_manager.go:851] "Failed to get status for pod" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" pod="openshift-marketplace/certified-operators-rfg87" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rfg87\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.242284 4813 status_manager.go:851] "Failed to get status for pod" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" pod="openshift-marketplace/redhat-marketplace-ppft9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppft9\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.242612 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:25 crc kubenswrapper[4813]: E0129 16:33:25.243262 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.261294 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.261338 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:25 crc kubenswrapper[4813]: E0129 16:33:25.261803 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:25 crc kubenswrapper[4813]: I0129 16:33:25.262447 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:25 crc kubenswrapper[4813]: E0129 16:33:25.374504 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 16:33:25 crc kubenswrapper[4813]: E0129 16:33:25.375170 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djckm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-klj7p_openshift-marketplace(64f64480-2953-4d8a-8374-7ee9bee7f712): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:33:25 crc kubenswrapper[4813]: E0129 16:33:25.376717 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.098819 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.100019 4813 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c" exitCode=1
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.100140 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c"}
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.101573 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.102027 4813 status_manager.go:851] "Failed to get status for pod" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" pod="openshift-marketplace/certified-operators-rfg87" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rfg87\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.102240 4813 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="75adf6e7d11623097dd20d0673302a6be23d25e8ec14cf3ebb840351ea01603a" exitCode=0
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.102333 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"75adf6e7d11623097dd20d0673302a6be23d25e8ec14cf3ebb840351ea01603a"}
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.102420 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"84340a1e031c8c5d9754cb0e92d9dc7104392f1ddb9beb694e1e049db46e55a1"}
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.102452 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.102925 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.102967 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.102948 4813 status_manager.go:851] "Failed to get status for pod" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" pod="openshift-marketplace/redhat-marketplace-ppft9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppft9\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:26 crc kubenswrapper[4813]: E0129 16:33:26.103396 4813 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.47:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.103507 4813 status_manager.go:851] "Failed to get status for pod" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.103725 4813 status_manager.go:851] "Failed to get status for pod" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" pod="openshift-marketplace/certified-operators-rfg87" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rfg87\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.103948 4813 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.104178 4813 status_manager.go:851] "Failed to get status for pod" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" pod="openshift-marketplace/redhat-marketplace-ppft9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ppft9\": dial tcp 38.102.83.47:6443: connect: connection refused"
Jan 29 16:33:26 crc kubenswrapper[4813]: I0129 16:33:26.105081 4813 scope.go:117] "RemoveContainer" containerID="e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c"
Jan 29 16:33:26 crc kubenswrapper[4813]: E0129 16:33:26.130634 4813 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-klj7p.188f40b173112407\": dial tcp 38.102.83.47:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-klj7p.188f40b173112407 openshift-marketplace 29508 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-klj7p,UID:64f64480-2953-4d8a-8374-7ee9bee7f712,APIVersion:v1,ResourceVersion:28456,FieldPath:spec.initContainers{extract-content},},Reason:BackOff,Message:Back-off pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 16:31:49 +0000 UTC,LastTimestamp:2026-01-29 16:33:12.241508393 +0000 UTC m=+244.728711609,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 16:33:27 crc kubenswrapper[4813]: I0129 16:33:27.128996 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 16:33:27 crc kubenswrapper[4813]: I0129 16:33:27.129496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e62ddf9512438d086fcc06e09d2dcc0421f8865d58731a7fce6995ef54216c3"}
Jan 29 16:33:27 crc kubenswrapper[4813]: I0129 16:33:27.140358 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"617b00bc140bc435a7ca58bd811712a68d7cfb8a9c748bb3be8ce18a45cedde1"}
Jan 29 16:33:27 crc kubenswrapper[4813]: I0129 16:33:27.140407 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7dd135d14d0cb257e23d215554ced228c0089d53a3bcf08d9ca0d28218d019ad"}
Jan 29 16:33:27 crc kubenswrapper[4813]: I0129 16:33:27.140416 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc88e11bb607357929d5fe721cc9742e003934eae852dd20582b558514fba3e9"}
Jan 29 16:33:27 crc kubenswrapper[4813]: I0129 16:33:27.140424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7934f9c6a20a1ac9135b4a3aac5d8601c5f606ccf32cc0db4827342b18bbac54"}
Jan 29 16:33:28 crc kubenswrapper[4813]: I0129 16:33:28.150847 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:28 crc kubenswrapper[4813]: I0129 16:33:28.150876 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:28 crc kubenswrapper[4813]: I0129 16:33:28.151088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e58cdcd3f2292ad55ec70e56a42ab41e4da0175c510e0c4096cd232455ae407"}
Jan 29 16:33:28 crc kubenswrapper[4813]: I0129 16:33:28.151145 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:28 crc kubenswrapper[4813]: E0129 16:33:28.372649 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:33:28 crc kubenswrapper[4813]: E0129 16:33:28.373449 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp6vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zc9n4_openshift-marketplace(0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:33:28 crc kubenswrapper[4813]: E0129 16:33:28.374984 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2"
Jan 29 16:33:30 crc kubenswrapper[4813]: I0129 16:33:30.263247 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:30 crc kubenswrapper[4813]: I0129 16:33:30.263349 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:30 crc kubenswrapper[4813]: I0129 16:33:30.269411 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:31 crc kubenswrapper[4813]: E0129 16:33:31.243102 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49"
Jan 29 16:33:32 crc kubenswrapper[4813]: E0129 16:33:32.241177 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1"
Jan 29 16:33:32 crc kubenswrapper[4813]: I0129 16:33:32.867021 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:33:33 crc kubenswrapper[4813]: I0129 16:33:33.159423 4813 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:34 crc kubenswrapper[4813]: I0129 16:33:34.178088 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:34 crc kubenswrapper[4813]: I0129 16:33:34.178168 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:34 crc kubenswrapper[4813]: I0129 16:33:34.182302 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 16:33:34 crc kubenswrapper[4813]: I0129 16:33:34.184829 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="50b836a8-68e3-46f3-893a-405e3b4f2be8"
Jan 29 16:33:34 crc kubenswrapper[4813]: I0129 16:33:34.749729 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 16:33:34 crc kubenswrapper[4813]: I0129 16:33:34.749956 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 29 16:33:34 crc kubenswrapper[4813]: I0129 16:33:34.750018 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 29 16:33:35 crc kubenswrapper[4813]: I0129 16:33:35.185510 4813 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:35 crc kubenswrapper[4813]: I0129 16:33:35.185576 4813 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5812dc35-171f-4cad-8da9-9f17499b5131"
Jan 29 16:33:35 crc kubenswrapper[4813]: E0129 16:33:35.242467 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614"
Jan 29 16:33:36 crc kubenswrapper[4813]: E0129 16:33:36.241747 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec"
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:33:38 crc kubenswrapper[4813]: E0129 16:33:38.255218 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ppft9" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" Jan 29 16:33:38 crc kubenswrapper[4813]: E0129 16:33:38.255880 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" Jan 29 16:33:38 crc kubenswrapper[4813]: I0129 16:33:38.283266 4813 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="50b836a8-68e3-46f3-893a-405e3b4f2be8" Jan 29 16:33:40 crc kubenswrapper[4813]: E0129 16:33:40.242080 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" Jan 29 16:33:42 crc kubenswrapper[4813]: I0129 16:33:42.741751 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 16:33:43 crc kubenswrapper[4813]: I0129 16:33:43.181339 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 16:33:43 crc kubenswrapper[4813]: I0129 16:33:43.321251 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 16:33:43 crc kubenswrapper[4813]: I0129 16:33:43.378586 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 16:33:43 crc kubenswrapper[4813]: I0129 16:33:43.608383 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.184275 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.389591 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.646681 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.664021 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.697851 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.750603 4813 patch_prober.go:28] 
interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.750685 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.787617 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 16:33:44 crc kubenswrapper[4813]: I0129 16:33:44.901375 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.014559 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.035759 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.191890 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.233636 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 16:33:45 crc kubenswrapper[4813]: E0129 16:33:45.242946 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-flcbx" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" Jan 29 16:33:45 crc kubenswrapper[4813]: E0129 16:33:45.243258 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.293167 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.417348 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.499638 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.543626 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.595130 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.629933 4813 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.669199 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.867648 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 16:33:45 crc kubenswrapper[4813]: I0129 16:33:45.888639 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.008603 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.134635 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.198070 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.208902 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.371472 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.408371 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.455303 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.461701 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.574027 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.689617 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.724518 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.733273 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.777837 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.805414 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 16:33:46 crc kubenswrapper[4813]: I0129 16:33:46.913071 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.029228 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.098045 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.194911 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.249568 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.276350 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.308242 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.339397 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.376751 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.438248 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.543881 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.621787 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.725449 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.771406 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.771900 4813 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.826232 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.878356 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 16:33:47 crc kubenswrapper[4813]: I0129 16:33:47.956513 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.007147 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.009562 4813 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.030999 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.080153 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.123419 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.215796 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 16:33:48 crc kubenswrapper[4813]: E0129 16:33:48.249284 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rfg87" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.276942 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.306455 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.340897 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.395606 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.487582 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.491363 4813 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.495620 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.495677 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.500025 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.514406 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.514385792 podStartE2EDuration="15.514385792s" podCreationTimestamp="2026-01-29 16:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:33:48.51300611 +0000 UTC m=+281.000209346" watchObservedRunningTime="2026-01-29 16:33:48.514385792 +0000 UTC m=+281.001589018" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.562184 4813 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.566594 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.708636 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.784321 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.820108 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.898324 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.914676 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.957987 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 16:33:48 crc kubenswrapper[4813]: I0129 16:33:48.988415 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.070451 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.094512 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.113374 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.200073 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.216711 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.224998 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.225383 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 16:33:49 crc kubenswrapper[4813]: E0129 16:33:49.241421 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6hb9j" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.386209 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.444743 4813 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.462839 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.491158 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.622710 4813 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.646379 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.655423 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.758470 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.800269 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.863599 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.869457 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.882489 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.942905 4813 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.952524 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 16:33:49 crc kubenswrapper[4813]: I0129 16:33:49.970722 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.009806 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.201335 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.228155 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.228737 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 16:33:50 crc kubenswrapper[4813]: E0129 16:33:50.241053 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ppft9" 
podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.358539 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.370573 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.467617 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.512852 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.602203 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.645222 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.687696 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.706450 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.714741 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.939368 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 16:33:50 crc kubenswrapper[4813]: I0129 16:33:50.957763 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.004540 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.065786 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.081332 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.165263 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.171455 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.187463 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.198171 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 16:33:51 crc 
kubenswrapper[4813]: I0129 16:33:51.208199 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.256522 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.581204 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.634253 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.646981 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.705790 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.787798 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.797231 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.822806 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.848964 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.907244 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.959812 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 16:33:51 crc kubenswrapper[4813]: I0129 16:33:51.961042 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.136846 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 16:33:52 crc kubenswrapper[4813]: E0129 16:33:52.241185 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wd6ld" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" Jan 29 16:33:52 crc kubenswrapper[4813]: E0129 16:33:52.241599 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-klj7p" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.266360 4813 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.327576 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.384468 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.440929 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.458002 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.488289 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.490986 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.493645 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.567333 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.695266 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.728825 4813 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.732801 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.741074 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.787532 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.814016 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 16:33:52 crc kubenswrapper[4813]: I0129 16:33:52.820444 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.122766 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 16:33:53 crc kubenswrapper[4813]: E0129 16:33:53.242041 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zc9n4" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.297688 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.370338 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.388344 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.415818 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.480826 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.487808 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.504204 4813 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.561715 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.642845 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.667528 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.700404 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.745064 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.811840 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.851891 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.905584 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.969415 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 16:33:53 crc kubenswrapper[4813]: I0129 16:33:53.995287 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.047468 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.095224 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.129746 4813 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.196965 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.209688 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.271393 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.448698 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.465912 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.482406 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.510871 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.554557 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.562691 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.567497 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.615444 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.675631 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.712752 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.716453 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.750195 4813 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.750281 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.750345 4813 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.751144 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"5e62ddf9512438d086fcc06e09d2dcc0421f8865d58731a7fce6995ef54216c3"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.751289 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://5e62ddf9512438d086fcc06e09d2dcc0421f8865d58731a7fce6995ef54216c3" gracePeriod=30 Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.766898 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.837041 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.837370 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.935311 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.964986 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 16:33:54 crc kubenswrapper[4813]: I0129 16:33:54.975819 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.035370 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.171791 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.233613 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.304812 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.307878 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.390384 4813 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.390600 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e" gracePeriod=5 Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.410245 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.546528 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.596910 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.605223 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.677689 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.728952 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.822606 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.890219 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.899222 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 16:33:55 crc kubenswrapper[4813]: I0129 16:33:55.984637 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.021342 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.042213 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.174043 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.182004 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.241509 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 16:33:56 crc kubenswrapper[4813]: E0129 16:33:56.241869 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdwp2" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.261054 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.288624 4813 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.313820 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.384797 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.403784 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.456292 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.623296 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.658613 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 16:33:56 crc kubenswrapper[4813]: I0129 16:33:56.725713 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.000251 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.035628 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.070801 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.089564 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.348863 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.376475 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.433330 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.608471 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.620007 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.624503 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-flcbx"] Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.626232 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.635627 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rfg87"] Jan 29 
16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.639763 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdwp2"] Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.646577 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-klj7p"] Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.663571 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jw9lk"] Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.665013 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" podUID="84994fa0-3f61-4bca-b679-6bc0a4cb1558" containerName="marketplace-operator" containerID="cri-o://b4e67949b6efb1b933910bfe0fc3d3677424bececd633cd9fe7353a9019ea0d0" gracePeriod=30 Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.689007 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.694815 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppft9"] Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.701766 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9n4"] Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.708052 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hb9j"] Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.741851 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.748933 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wd6ld"] Jan 29 16:33:57 crc kubenswrapper[4813]: I0129 16:33:57.794941 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.023752 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.036908 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.055605 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.116793 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-catalog-content\") pod \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.117562 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr64c\" (UniqueName: \"kubernetes.io/projected/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-kube-api-access-rr64c\") pod \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.117604 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-utilities\") pod \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\" (UID: \"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.119189 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-utilities" (OuterVolumeSpecName: "utilities") pod "61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" (UID: "61eb0d4e-a892-4ca0-aad1-5c2fe1039fec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.119472 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" (UID: "61eb0d4e-a892-4ca0-aad1-5c2fe1039fec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.138356 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.138373 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-kube-api-access-rr64c" (OuterVolumeSpecName: "kube-api-access-rr64c") pod "61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" (UID: "61eb0d4e-a892-4ca0-aad1-5c2fe1039fec"). InnerVolumeSpecName "kube-api-access-rr64c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.140395 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.172206 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.185246 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hb9j" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.185457 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc9n4" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.205697 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klj7p" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.205940 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.219412 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.219645 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr64c\" (UniqueName: \"kubernetes.io/projected/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-kube-api-access-rr64c\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.219697 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.227223 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.231578 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.236668 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdwp2" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.311964 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdwp2" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.312212 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdwp2" event={"ID":"97b1fc85-bbb9-4308-905f-26f3258e1cc1","Type":"ContainerDied","Data":"c06956778438330260594cc1973f5d23214b7a3ae63ef24a75e3d1de8189a554"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.312315 4813 scope.go:117] "RemoveContainer" containerID="93cac358cd759caa4af333e1a82bd732094f2e6d91253a1fedf4d18ce4f57193" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.313933 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-flcbx" event={"ID":"d78854f3-3848-4a36-83dc-d10dd1d49d49","Type":"ContainerDied","Data":"a9f1fc1830537818c96b9d57c0b0b3bf4c8f3859f9884e3fac36fca26a0be1ef"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.314056 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-flcbx" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.315543 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9n4" event={"ID":"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2","Type":"ContainerDied","Data":"adfb2f6cdbb310803bb0535d200040b448e90383f49b9a70ac2a1ff1f9f78e95"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.315612 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc9n4" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.318681 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wd6ld" event={"ID":"1106c979-ac94-49f2-a9a9-cd044c3df80c","Type":"ContainerDied","Data":"0f62a9a59bb776e5467aa88b65e25d42514d50f09cb83f45874905f0ae378087"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.318822 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wd6ld" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321194 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkdwl\" (UniqueName: \"kubernetes.io/projected/91ba5590-c7f3-4892-95fb-c39fbffe7278-kube-api-access-qkdwl\") pod \"91ba5590-c7f3-4892-95fb-c39fbffe7278\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321486 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-catalog-content\") pod \"d78854f3-3848-4a36-83dc-d10dd1d49d49\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp6vk\" (UniqueName: \"kubernetes.io/projected/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-kube-api-access-mp6vk\") pod \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321589 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwhl2\" (UniqueName: \"kubernetes.io/projected/d78854f3-3848-4a36-83dc-d10dd1d49d49-kube-api-access-cwhl2\") pod \"d78854f3-3848-4a36-83dc-d10dd1d49d49\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-utilities\") pod \"91ba5590-c7f3-4892-95fb-c39fbffe7278\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321664 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djckm\" (UniqueName: \"kubernetes.io/projected/64f64480-2953-4d8a-8374-7ee9bee7f712-kube-api-access-djckm\") pod \"64f64480-2953-4d8a-8374-7ee9bee7f712\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321686 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-utilities\") pod \"1106c979-ac94-49f2-a9a9-cd044c3df80c\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321738 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28276\" (UniqueName: \"kubernetes.io/projected/a5f3efb2-a274-4b29-bc4b-30d924188614-kube-api-access-28276\") pod \"a5f3efb2-a274-4b29-bc4b-30d924188614\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321760 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-utilities\") pod \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321781 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-catalog-content\") pod \"1106c979-ac94-49f2-a9a9-cd044c3df80c\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321817 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-catalog-content\") pod \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\" (UID: \"0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-catalog-content\") pod \"64f64480-2953-4d8a-8374-7ee9bee7f712\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321875 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-utilities\") pod \"64f64480-2953-4d8a-8374-7ee9bee7f712\" (UID: \"64f64480-2953-4d8a-8374-7ee9bee7f712\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.321980 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-catalog-content\") pod \"a5f3efb2-a274-4b29-bc4b-30d924188614\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.322003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-catalog-content\") pod \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.322168 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-utilities\") pod \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.322234 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-catalog-content\") pod \"91ba5590-c7f3-4892-95fb-c39fbffe7278\" (UID: \"91ba5590-c7f3-4892-95fb-c39fbffe7278\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.322261 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pxv6\" (UniqueName: \"kubernetes.io/projected/1106c979-ac94-49f2-a9a9-cd044c3df80c-kube-api-access-2pxv6\") pod \"1106c979-ac94-49f2-a9a9-cd044c3df80c\" (UID: \"1106c979-ac94-49f2-a9a9-cd044c3df80c\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.322429 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-utilities\") pod \"d78854f3-3848-4a36-83dc-d10dd1d49d49\" (UID: \"d78854f3-3848-4a36-83dc-d10dd1d49d49\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.322493 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52p5k\" (UniqueName: \"kubernetes.io/projected/97b1fc85-bbb9-4308-905f-26f3258e1cc1-kube-api-access-52p5k\") pod \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\" (UID: \"97b1fc85-bbb9-4308-905f-26f3258e1cc1\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.322554 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-utilities\") pod \"a5f3efb2-a274-4b29-bc4b-30d924188614\" (UID: \"a5f3efb2-a274-4b29-bc4b-30d924188614\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.324137 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-utilities" (OuterVolumeSpecName: "utilities") pod "a5f3efb2-a274-4b29-bc4b-30d924188614" (UID: "a5f3efb2-a274-4b29-bc4b-30d924188614"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.324279 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.324695 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klj7p" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.324767 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klj7p" event={"ID":"64f64480-2953-4d8a-8374-7ee9bee7f712","Type":"ContainerDied","Data":"aaea0958c30323e47269e4fce3b55e175381a55f652448002afeb47b12200d5c"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.325201 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97b1fc85-bbb9-4308-905f-26f3258e1cc1" (UID: "97b1fc85-bbb9-4308-905f-26f3258e1cc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.325554 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-utilities" (OuterVolumeSpecName: "utilities") pod "97b1fc85-bbb9-4308-905f-26f3258e1cc1" (UID: "97b1fc85-bbb9-4308-905f-26f3258e1cc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.325798 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5f3efb2-a274-4b29-bc4b-30d924188614" (UID: "a5f3efb2-a274-4b29-bc4b-30d924188614"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.326807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ppft9" event={"ID":"91ba5590-c7f3-4892-95fb-c39fbffe7278","Type":"ContainerDied","Data":"5f9f90451dae016cb0b0fcb381d6e37599a215243657f8dcba8040c11bca74f6"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.326894 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ppft9" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.327083 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f3efb2-a274-4b29-bc4b-30d924188614-kube-api-access-28276" (OuterVolumeSpecName: "kube-api-access-28276") pod "a5f3efb2-a274-4b29-bc4b-30d924188614" (UID: "a5f3efb2-a274-4b29-bc4b-30d924188614"). InnerVolumeSpecName "kube-api-access-28276". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.327560 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-kube-api-access-mp6vk" (OuterVolumeSpecName: "kube-api-access-mp6vk") pod "0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" (UID: "0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2"). InnerVolumeSpecName "kube-api-access-mp6vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.327939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rfg87" event={"ID":"61eb0d4e-a892-4ca0-aad1-5c2fe1039fec","Type":"ContainerDied","Data":"b737d0f4d035939f73a965fe491564415bec558c86df63cdce743c9494d47bb2"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.328015 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rfg87" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.329703 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b1fc85-bbb9-4308-905f-26f3258e1cc1-kube-api-access-52p5k" (OuterVolumeSpecName: "kube-api-access-52p5k") pod "97b1fc85-bbb9-4308-905f-26f3258e1cc1" (UID: "97b1fc85-bbb9-4308-905f-26f3258e1cc1"). InnerVolumeSpecName "kube-api-access-52p5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.329763 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" (UID: "0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.329777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d78854f3-3848-4a36-83dc-d10dd1d49d49" (UID: "d78854f3-3848-4a36-83dc-d10dd1d49d49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.330382 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91ba5590-c7f3-4892-95fb-c39fbffe7278" (UID: "91ba5590-c7f3-4892-95fb-c39fbffe7278"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.330447 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-utilities" (OuterVolumeSpecName: "utilities") pod "91ba5590-c7f3-4892-95fb-c39fbffe7278" (UID: "91ba5590-c7f3-4892-95fb-c39fbffe7278"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.330443 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-utilities" (OuterVolumeSpecName: "utilities") pod "0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" (UID: "0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.330457 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1106c979-ac94-49f2-a9a9-cd044c3df80c" (UID: "1106c979-ac94-49f2-a9a9-cd044c3df80c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.330491 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-utilities" (OuterVolumeSpecName: "utilities") pod "1106c979-ac94-49f2-a9a9-cd044c3df80c" (UID: "1106c979-ac94-49f2-a9a9-cd044c3df80c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.330879 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-utilities" (OuterVolumeSpecName: "utilities") pod "d78854f3-3848-4a36-83dc-d10dd1d49d49" (UID: "d78854f3-3848-4a36-83dc-d10dd1d49d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.331230 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hb9j" event={"ID":"a5f3efb2-a274-4b29-bc4b-30d924188614","Type":"ContainerDied","Data":"7de9752add78416bc752cd729c8f7d8c233eea056aa3b50e95f653d76ff6e3b0"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.331309 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hb9j" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.331869 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f64480-2953-4d8a-8374-7ee9bee7f712-kube-api-access-djckm" (OuterVolumeSpecName: "kube-api-access-djckm") pod "64f64480-2953-4d8a-8374-7ee9bee7f712" (UID: "64f64480-2953-4d8a-8374-7ee9bee7f712"). 
InnerVolumeSpecName "kube-api-access-djckm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.332416 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-utilities" (OuterVolumeSpecName: "utilities") pod "64f64480-2953-4d8a-8374-7ee9bee7f712" (UID: "64f64480-2953-4d8a-8374-7ee9bee7f712"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.334065 4813 generic.go:334] "Generic (PLEG): container finished" podID="84994fa0-3f61-4bca-b679-6bc0a4cb1558" containerID="b4e67949b6efb1b933910bfe0fc3d3677424bececd633cd9fe7353a9019ea0d0" exitCode=0 Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.334162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" event={"ID":"84994fa0-3f61-4bca-b679-6bc0a4cb1558","Type":"ContainerDied","Data":"b4e67949b6efb1b933910bfe0fc3d3677424bececd633cd9fe7353a9019ea0d0"} Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.334436 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64f64480-2953-4d8a-8374-7ee9bee7f712" (UID: "64f64480-2953-4d8a-8374-7ee9bee7f712"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.335089 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.339640 4813 scope.go:117] "RemoveContainer" containerID="f4c574b1a2c3f1e62bd1f345bf105c2f1afefa48383690a57be6221feb12379e" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.339656 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ba5590-c7f3-4892-95fb-c39fbffe7278-kube-api-access-qkdwl" (OuterVolumeSpecName: "kube-api-access-qkdwl") pod "91ba5590-c7f3-4892-95fb-c39fbffe7278" (UID: "91ba5590-c7f3-4892-95fb-c39fbffe7278"). InnerVolumeSpecName "kube-api-access-qkdwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.339737 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1106c979-ac94-49f2-a9a9-cd044c3df80c-kube-api-access-2pxv6" (OuterVolumeSpecName: "kube-api-access-2pxv6") pod "1106c979-ac94-49f2-a9a9-cd044c3df80c" (UID: "1106c979-ac94-49f2-a9a9-cd044c3df80c"). InnerVolumeSpecName "kube-api-access-2pxv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.346782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78854f3-3848-4a36-83dc-d10dd1d49d49-kube-api-access-cwhl2" (OuterVolumeSpecName: "kube-api-access-cwhl2") pod "d78854f3-3848-4a36-83dc-d10dd1d49d49" (UID: "d78854f3-3848-4a36-83dc-d10dd1d49d49"). InnerVolumeSpecName "kube-api-access-cwhl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.356083 4813 scope.go:117] "RemoveContainer" containerID="c9fe0c1cf128be6497ce86fb75fc16d5aeae3009a6149bf8f8e9b8e3ef739d9f" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.379817 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.400639 4813 scope.go:117] "RemoveContainer" containerID="907ace76a173f3cc5cca38f2aa860eaf5fce7e7395fef5ef2892c0c1b8456ba0" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.403401 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.419220 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rfg87"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.423406 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rfg87"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.423762 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-trusted-ca\") pod \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.423799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhr4z\" (UniqueName: \"kubernetes.io/projected/84994fa0-3f61-4bca-b679-6bc0a4cb1558-kube-api-access-xhr4z\") pod \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.423822 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-operator-metrics\") pod \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\" (UID: \"84994fa0-3f61-4bca-b679-6bc0a4cb1558\") " Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424014 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424032 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f64480-2953-4d8a-8374-7ee9bee7f712-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424042 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424051 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424060 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b1fc85-bbb9-4308-905f-26f3258e1cc1-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424069 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424077 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pxv6\" (UniqueName: \"kubernetes.io/projected/1106c979-ac94-49f2-a9a9-cd044c3df80c-kube-api-access-2pxv6\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424087 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424096 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52p5k\" (UniqueName: \"kubernetes.io/projected/97b1fc85-bbb9-4308-905f-26f3258e1cc1-kube-api-access-52p5k\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424104 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5f3efb2-a274-4b29-bc4b-30d924188614-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424127 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkdwl\" (UniqueName: \"kubernetes.io/projected/91ba5590-c7f3-4892-95fb-c39fbffe7278-kube-api-access-qkdwl\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424135 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78854f3-3848-4a36-83dc-d10dd1d49d49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424143 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp6vk\" (UniqueName: \"kubernetes.io/projected/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-kube-api-access-mp6vk\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424151 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwhl2\" (UniqueName: \"kubernetes.io/projected/d78854f3-3848-4a36-83dc-d10dd1d49d49-kube-api-access-cwhl2\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424160 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ba5590-c7f3-4892-95fb-c39fbffe7278-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424168 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djckm\" (UniqueName: \"kubernetes.io/projected/64f64480-2953-4d8a-8374-7ee9bee7f712-kube-api-access-djckm\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424176 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424185 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28276\" (UniqueName: \"kubernetes.io/projected/a5f3efb2-a274-4b29-bc4b-30d924188614-kube-api-access-28276\") on node \"crc\" 
DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424193 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424202 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1106c979-ac94-49f2-a9a9-cd044c3df80c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424214 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.424403 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "84994fa0-3f61-4bca-b679-6bc0a4cb1558" (UID: "84994fa0-3f61-4bca-b679-6bc0a4cb1558"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.426800 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "84994fa0-3f61-4bca-b679-6bc0a4cb1558" (UID: "84994fa0-3f61-4bca-b679-6bc0a4cb1558"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.426811 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84994fa0-3f61-4bca-b679-6bc0a4cb1558-kube-api-access-xhr4z" (OuterVolumeSpecName: "kube-api-access-xhr4z") pod "84994fa0-3f61-4bca-b679-6bc0a4cb1558" (UID: "84994fa0-3f61-4bca-b679-6bc0a4cb1558"). InnerVolumeSpecName "kube-api-access-xhr4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.434672 4813 scope.go:117] "RemoveContainer" containerID="59e3b7bfc45843baefb30d50f86007f0ec7d79269be3efe1606188e75588ce97" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.450441 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hb9j"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.453963 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6hb9j"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.457888 4813 scope.go:117] "RemoveContainer" containerID="05c84ecf83242e95e61100c660d1f96020f53759455d3314d081ee4731f7e7c7" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.477303 4813 scope.go:117] "RemoveContainer" containerID="906c446bda213dcbe91f8d599dfe0ea9e2e12678807673684d7ee564319c150e" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.491412 4813 scope.go:117] "RemoveContainer" containerID="325cf0ad115dc23b452ac9d6f45ea6ab48bfdb80909645b8e711ae3be5c2a82d" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.499972 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.526309 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.526392 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhr4z\" (UniqueName: \"kubernetes.io/projected/84994fa0-3f61-4bca-b679-6bc0a4cb1558-kube-api-access-xhr4z\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.526402 4813 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84994fa0-3f61-4bca-b679-6bc0a4cb1558-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.570141 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.683168 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-flcbx"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.692004 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-flcbx"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.708424 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdwp2"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.713517 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jdwp2"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.750459 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9n4"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.755765 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9n4"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.778137 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wd6ld"] Jan 29 16:33:58 crc 
kubenswrapper[4813]: I0129 16:33:58.783358 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wd6ld"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.819590 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppft9"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.824758 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ppft9"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.836032 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.840996 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.843815 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-klj7p"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.847967 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-klj7p"] Jan 29 16:33:58 crc kubenswrapper[4813]: I0129 16:33:58.965853 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 16:33:59 crc kubenswrapper[4813]: I0129 16:33:59.104022 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 16:33:59 crc kubenswrapper[4813]: I0129 16:33:59.342241 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" Jan 29 16:33:59 crc kubenswrapper[4813]: I0129 16:33:59.342232 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jw9lk" event={"ID":"84994fa0-3f61-4bca-b679-6bc0a4cb1558","Type":"ContainerDied","Data":"3ab6e5be3b0dbfaeed85cfab0a3861dc411f86f0ec2696593efecd9ec520642c"} Jan 29 16:33:59 crc kubenswrapper[4813]: I0129 16:33:59.342380 4813 scope.go:117] "RemoveContainer" containerID="b4e67949b6efb1b933910bfe0fc3d3677424bececd633cd9fe7353a9019ea0d0" Jan 29 16:33:59 crc kubenswrapper[4813]: I0129 16:33:59.373485 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jw9lk"] Jan 29 16:33:59 crc kubenswrapper[4813]: I0129 16:33:59.380542 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jw9lk"] Jan 29 16:33:59 crc kubenswrapper[4813]: I0129 16:33:59.543955 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 16:33:59 crc kubenswrapper[4813]: I0129 16:33:59.958242 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.250152 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" path="/var/lib/kubelet/pods/0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.252381 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" 
path="/var/lib/kubelet/pods/1106c979-ac94-49f2-a9a9-cd044c3df80c/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.252845 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" path="/var/lib/kubelet/pods/61eb0d4e-a892-4ca0-aad1-5c2fe1039fec/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.253330 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" path="/var/lib/kubelet/pods/64f64480-2953-4d8a-8374-7ee9bee7f712/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.254249 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84994fa0-3f61-4bca-b679-6bc0a4cb1558" path="/var/lib/kubelet/pods/84994fa0-3f61-4bca-b679-6bc0a4cb1558/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.254796 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" path="/var/lib/kubelet/pods/91ba5590-c7f3-4892-95fb-c39fbffe7278/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.255271 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" path="/var/lib/kubelet/pods/97b1fc85-bbb9-4308-905f-26f3258e1cc1/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.255700 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" path="/var/lib/kubelet/pods/a5f3efb2-a274-4b29-bc4b-30d924188614/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.256652 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" path="/var/lib/kubelet/pods/d78854f3-3848-4a36-83dc-d10dd1d49d49/volumes" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.984829 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:34:00 crc kubenswrapper[4813]: I0129 16:34:00.984963 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.005970 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159709 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159779 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159803 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159848 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159844 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159883 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159931 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159931 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.159998 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.160393 4813 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.160425 4813 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.160483 4813 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.160507 4813 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.172844 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.261445 4813 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.364284 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.364348 4813 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e" exitCode=137 Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.364397 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.364402 4813 scope.go:117] "RemoveContainer" containerID="940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.380283 4813 scope.go:117] "RemoveContainer" containerID="940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e" Jan 29 16:34:01 crc kubenswrapper[4813]: E0129 16:34:01.381071 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e\": container with ID starting with 940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e not found: ID does not exist" containerID="940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e" Jan 29 16:34:01 crc kubenswrapper[4813]: I0129 16:34:01.381132 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e"} err="failed to get container status \"940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e\": rpc error: code = NotFound desc = could not find container \"940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e\": container with ID starting with 940b49532a645639b2c26a654c166c71898ca516a4e581d605811e3ca3093b2e not found: ID does not exist" Jan 29 16:34:02 crc kubenswrapper[4813]: I0129 16:34:02.246474 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 16:34:08 crc kubenswrapper[4813]: I0129 16:34:08.044429 4813 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 16:34:25 crc kubenswrapper[4813]: I0129 16:34:25.483507 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 16:34:25 crc kubenswrapper[4813]: I0129 16:34:25.486214 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 16:34:25 crc kubenswrapper[4813]: I0129 16:34:25.486267 4813 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5e62ddf9512438d086fcc06e09d2dcc0421f8865d58731a7fce6995ef54216c3" exitCode=137 Jan 29 16:34:25 crc kubenswrapper[4813]: I0129 16:34:25.486300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5e62ddf9512438d086fcc06e09d2dcc0421f8865d58731a7fce6995ef54216c3"} Jan 29 16:34:25 crc kubenswrapper[4813]: I0129 16:34:25.486335 4813 scope.go:117] "RemoveContainer" containerID="e790d1e5c814ef48ca5071bf9c45d9d9e631048f9c399700f9473df2051e5d5c" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068300 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m6m45"] Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068550 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" 
containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068580 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068597 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068605 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068621 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068640 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068650 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84994fa0-3f61-4bca-b679-6bc0a4cb1558" containerName="marketplace-operator" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068657 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="84994fa0-3f61-4bca-b679-6bc0a4cb1558" containerName="marketplace-operator" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068669 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068677 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068687 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068695 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068707 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068718 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068753 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068765 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068780 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068790 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068805 4813 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068814 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: E0129 16:34:26.068825 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" containerName="installer" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068833 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" containerName="installer" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068936 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1712182e-1c1a-4d42-87bb-29d7df20db88" containerName="installer" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068950 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f64480-2953-4d8a-8374-7ee9bee7f712" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068961 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78854f3-3848-4a36-83dc-d10dd1d49d49" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068973 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c65b4cb-fbe4-4d41-ae09-cf137cd6a4b2" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068987 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="84994fa0-3f61-4bca-b679-6bc0a4cb1558" containerName="marketplace-operator" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.068997 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="61eb0d4e-a892-4ca0-aad1-5c2fe1039fec" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.069013 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b1fc85-bbb9-4308-905f-26f3258e1cc1" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.069024 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ba5590-c7f3-4892-95fb-c39fbffe7278" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.069031 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.069045 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1106c979-ac94-49f2-a9a9-cd044c3df80c" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.069058 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f3efb2-a274-4b29-bc4b-30d924188614" containerName="extract-utilities" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.069923 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.073628 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.073686 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.076352 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.085626 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6m45"] Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.088774 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e573ebb-94a9-440d-bba1-58251a12dfb9-catalog-content\") pod \"certified-operators-m6m45\" (UID: \"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.088910 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz8nx\" (UniqueName: \"kubernetes.io/projected/4e573ebb-94a9-440d-bba1-58251a12dfb9-kube-api-access-gz8nx\") pod \"certified-operators-m6m45\" (UID: \"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.089033 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e573ebb-94a9-440d-bba1-58251a12dfb9-utilities\") pod \"certified-operators-m6m45\" (UID: \"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.189985 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e573ebb-94a9-440d-bba1-58251a12dfb9-catalog-content\") pod \"certified-operators-m6m45\" (UID: \"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.190081 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz8nx\" (UniqueName: \"kubernetes.io/projected/4e573ebb-94a9-440d-bba1-58251a12dfb9-kube-api-access-gz8nx\") pod \"certified-operators-m6m45\" (UID: \"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.190148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e573ebb-94a9-440d-bba1-58251a12dfb9-utilities\") pod \"certified-operators-m6m45\" (UID: \"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.190781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e573ebb-94a9-440d-bba1-58251a12dfb9-utilities\") pod \"certified-operators-m6m45\" (UID: 
\"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.190781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e573ebb-94a9-440d-bba1-58251a12dfb9-catalog-content\") pod \"certified-operators-m6m45\" (UID: \"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.231298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz8nx\" (UniqueName: \"kubernetes.io/projected/4e573ebb-94a9-440d-bba1-58251a12dfb9-kube-api-access-gz8nx\") pod \"certified-operators-m6m45\" (UID: \"4e573ebb-94a9-440d-bba1-58251a12dfb9\") " pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.260595 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c24v5"] Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.261942 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c24v5" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.264704 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.275929 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c24v5"] Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.292392 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bc62\" (UniqueName: \"kubernetes.io/projected/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-kube-api-access-7bc62\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.292516 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-utilities\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.292532 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-catalog-content\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5" Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.388272 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6m45"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.393953 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bc62\" (UniqueName: \"kubernetes.io/projected/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-kube-api-access-7bc62\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.394014 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-catalog-content\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.394034 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-utilities\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.394505 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-utilities\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.394583 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-catalog-content\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.414751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bc62\" (UniqueName: \"kubernetes.io/projected/3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1-kube-api-access-7bc62\") pod \"community-operators-c24v5\" (UID: \"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1\") " pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.496760 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.498431 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c9e7f0946411c22a00671e3feb1e01b6880d4db24ce7073a3635c3a935864806"}
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.580003 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.596075 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6m45"]
Jan 29 16:34:26 crc kubenswrapper[4813]: I0129 16:34:26.794910 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c24v5"]
Jan 29 16:34:26 crc kubenswrapper[4813]: W0129 16:34:26.811972 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bd5bef2_b1f5_4fa4_bf92_ea5a91f007b1.slice/crio-49fb01dac2ce125ee2a5d00ba59685bd4783fb98a6cc740117a881b2c42e1017 WatchSource:0}: Error finding container 49fb01dac2ce125ee2a5d00ba59685bd4783fb98a6cc740117a881b2c42e1017: Status 404 returned error can't find the container with id 49fb01dac2ce125ee2a5d00ba59685bd4783fb98a6cc740117a881b2c42e1017
Jan 29 16:34:27 crc kubenswrapper[4813]: I0129 16:34:27.508473 4813 generic.go:334] "Generic (PLEG): container finished" podID="4e573ebb-94a9-440d-bba1-58251a12dfb9" containerID="a923cd1cac744ce53a8a922947967fe2389ec57701a73a551a572e6c07a80c40" exitCode=0
Jan 29 16:34:27 crc kubenswrapper[4813]: I0129 16:34:27.508627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6m45" event={"ID":"4e573ebb-94a9-440d-bba1-58251a12dfb9","Type":"ContainerDied","Data":"a923cd1cac744ce53a8a922947967fe2389ec57701a73a551a572e6c07a80c40"}
Jan 29 16:34:27 crc kubenswrapper[4813]: I0129 16:34:27.508700 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6m45" event={"ID":"4e573ebb-94a9-440d-bba1-58251a12dfb9","Type":"ContainerStarted","Data":"808a5e0f63bc28822f4fdc5376eac18cd21c99ee1eb48d12b65d3f659943cbfd"}
Jan 29 16:34:27 crc kubenswrapper[4813]: I0129 16:34:27.510396 4813 generic.go:334] "Generic (PLEG): container finished" podID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" containerID="d239174bbcbb18bbdbf8177be52c53c0fa3434c15ca48a8e4eba58654e27efd4" exitCode=0
Jan 29 16:34:27 crc kubenswrapper[4813]: I0129 16:34:27.510483 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c24v5" event={"ID":"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1","Type":"ContainerDied","Data":"d239174bbcbb18bbdbf8177be52c53c0fa3434c15ca48a8e4eba58654e27efd4"}
Jan 29 16:34:27 crc kubenswrapper[4813]: I0129 16:34:27.510526 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c24v5" event={"ID":"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1","Type":"ContainerStarted","Data":"49fb01dac2ce125ee2a5d00ba59685bd4783fb98a6cc740117a881b2c42e1017"}
Jan 29 16:34:27 crc kubenswrapper[4813]: E0129 16:34:27.633751 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 16:34:27 crc kubenswrapper[4813]: E0129 16:34:27.633908 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz8nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m6m45_openshift-marketplace(4e573ebb-94a9-440d-bba1-58251a12dfb9): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:34:27 crc kubenswrapper[4813]: E0129 16:34:27.635103 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9"
Jan 29 16:34:27 crc kubenswrapper[4813]: E0129 16:34:27.639901 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 16:34:27 crc kubenswrapper[4813]: E0129 16:34:27.640049 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bc62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c24v5_openshift-marketplace(3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:34:27 crc kubenswrapper[4813]: E0129 16:34:27.641477 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1"
Jan 29 16:34:28 crc kubenswrapper[4813]: E0129 16:34:28.518689 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1"
Jan 29 16:34:28 crc kubenswrapper[4813]: E0129 16:34:28.518819 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.657175 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p9jg2"]
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.658172 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.660578 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.666969 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9jg2"]
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.821525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b560cafb-e64c-45b0-912d-1d086bfb8d20-catalog-content\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.821580 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b560cafb-e64c-45b0-912d-1d086bfb8d20-utilities\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.821644 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmdb\" (UniqueName: \"kubernetes.io/projected/b560cafb-e64c-45b0-912d-1d086bfb8d20-kube-api-access-bjmdb\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.859637 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7hpkl"]
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.860610 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.864277 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.873558 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7hpkl"]
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.922996 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjmdb\" (UniqueName: \"kubernetes.io/projected/b560cafb-e64c-45b0-912d-1d086bfb8d20-kube-api-access-bjmdb\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.923063 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b560cafb-e64c-45b0-912d-1d086bfb8d20-catalog-content\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.923093 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b560cafb-e64c-45b0-912d-1d086bfb8d20-utilities\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.923487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b560cafb-e64c-45b0-912d-1d086bfb8d20-utilities\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.923552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b560cafb-e64c-45b0-912d-1d086bfb8d20-catalog-content\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.942241 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjmdb\" (UniqueName: \"kubernetes.io/projected/b560cafb-e64c-45b0-912d-1d086bfb8d20-kube-api-access-bjmdb\") pod \"redhat-marketplace-p9jg2\" (UID: \"b560cafb-e64c-45b0-912d-1d086bfb8d20\") " pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:28 crc kubenswrapper[4813]: I0129 16:34:28.976059 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9jg2"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.024690 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-utilities\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.024739 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-catalog-content\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.024769 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfb6\" (UniqueName: \"kubernetes.io/projected/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-kube-api-access-4dfb6\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.126508 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-utilities\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.126906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-catalog-content\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.126963 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dfb6\" (UniqueName: \"kubernetes.io/projected/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-kube-api-access-4dfb6\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.127201 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-utilities\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.127536 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-catalog-content\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.171464 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfb6\" (UniqueName: \"kubernetes.io/projected/3d9a67ec-1fb1-4442-99b8-c7ee1b729e23-kube-api-access-4dfb6\") pod \"redhat-operators-7hpkl\" (UID: \"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23\") " pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.174525 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hpkl"
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.230153 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9jg2"]
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.358843 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7hpkl"]
Jan 29 16:34:29 crc kubenswrapper[4813]: W0129 16:34:29.373519 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d9a67ec_1fb1_4442_99b8_c7ee1b729e23.slice/crio-9e7b18d071f2df950872f70225654bc66c76a9d4a2f2bd2a37ac03c51537fb52 WatchSource:0}: Error finding container 9e7b18d071f2df950872f70225654bc66c76a9d4a2f2bd2a37ac03c51537fb52: Status 404 returned error can't find the container with id 9e7b18d071f2df950872f70225654bc66c76a9d4a2f2bd2a37ac03c51537fb52
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.522996 4813 generic.go:334] "Generic (PLEG): container finished" podID="b560cafb-e64c-45b0-912d-1d086bfb8d20" containerID="26db4aaeb5341508863898b1de171082979ee1ae1a73695fc98dd183259d5c60" exitCode=0
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.523152 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9jg2" event={"ID":"b560cafb-e64c-45b0-912d-1d086bfb8d20","Type":"ContainerDied","Data":"26db4aaeb5341508863898b1de171082979ee1ae1a73695fc98dd183259d5c60"}
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.524014 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9jg2" event={"ID":"b560cafb-e64c-45b0-912d-1d086bfb8d20","Type":"ContainerStarted","Data":"eda1a968d14f0314f3ca229144017890d54c07951c89fb199b9fb9fac992cc9e"}
Jan 29 16:34:29 crc kubenswrapper[4813]: I0129 16:34:29.527652 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hpkl" event={"ID":"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23","Type":"ContainerStarted","Data":"9e7b18d071f2df950872f70225654bc66c76a9d4a2f2bd2a37ac03c51537fb52"}
Jan 29 16:34:29 crc kubenswrapper[4813]: E0129 16:34:29.646241 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:34:29 crc kubenswrapper[4813]: E0129 16:34:29.646445 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjmdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p9jg2_openshift-marketplace(b560cafb-e64c-45b0-912d-1d086bfb8d20): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:34:29 crc kubenswrapper[4813]: E0129 16:34:29.647647 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20"
Jan 29 16:34:30 crc kubenswrapper[4813]: I0129 16:34:30.239994 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:34:30 crc kubenswrapper[4813]: I0129 16:34:30.240051 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:34:30 crc kubenswrapper[4813]: I0129 16:34:30.533502 4813 generic.go:334] "Generic (PLEG): container finished" podID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" containerID="3eb88f325ea1d57fe8c42e8215bbd056768e842a6e45b11bd47f0aa21c0bc0c4" exitCode=0
Jan 29 16:34:30 crc kubenswrapper[4813]: I0129 16:34:30.533568 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hpkl" event={"ID":"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23","Type":"ContainerDied","Data":"3eb88f325ea1d57fe8c42e8215bbd056768e842a6e45b11bd47f0aa21c0bc0c4"}
Jan 29 16:34:30 crc kubenswrapper[4813]: E0129 16:34:30.543297 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20"
for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:34:30 crc kubenswrapper[4813]: E0129 16:34:30.663583 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:34:30 crc kubenswrapper[4813]: E0129 16:34:30.664202 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dfb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7hpkl_openshift-marketplace(3d9a67ec-1fb1-4442-99b8-c7ee1b729e23): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:34:30 crc kubenswrapper[4813]: E0129 16:34:30.666356 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7hpkl" podUID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" Jan 29 16:34:31 crc kubenswrapper[4813]: E0129 16:34:31.541698 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7hpkl" podUID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" Jan 29 16:34:32 crc kubenswrapper[4813]: I0129 16:34:32.866793 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:34 crc kubenswrapper[4813]: I0129 16:34:34.750074 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:34 crc kubenswrapper[4813]: I0129 16:34:34.770711 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:35 crc kubenswrapper[4813]: I0129 16:34:35.563792 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 16:34:41 crc kubenswrapper[4813]: E0129 16:34:41.388712 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:34:41 crc kubenswrapper[4813]: E0129 16:34:41.389840 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz8nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m6m45_openshift-marketplace(4e573ebb-94a9-440d-bba1-58251a12dfb9): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:34:41 crc kubenswrapper[4813]: E0129 16:34:41.391140 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:34:42 crc 
kubenswrapper[4813]: E0129 16:34:42.362628 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:34:42 crc kubenswrapper[4813]: E0129 16:34:42.363082 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dfb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7hpkl_openshift-marketplace(3d9a67ec-1fb1-4442-99b8-c7ee1b729e23): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:34:42 crc kubenswrapper[4813]: E0129 16:34:42.364405 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7hpkl" podUID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" Jan 29 16:34:42 crc kubenswrapper[4813]: E0129 16:34:42.368721 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:34:42 crc kubenswrapper[4813]: E0129 16:34:42.369436 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 29 16:34:42 crc kubenswrapper[4813]: E0129 16:34:42.370791 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1"
Jan 29 16:34:42 crc kubenswrapper[4813]: I0129 16:34:42.897357 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4h4mv"]
Jan 29 16:34:42 crc kubenswrapper[4813]: I0129 16:34:42.898605 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:42 crc kubenswrapper[4813]: I0129 16:34:42.901727 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 16:34:42 crc kubenswrapper[4813]: I0129 16:34:42.902017 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 16:34:42 crc kubenswrapper[4813]: I0129 16:34:42.911715 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 16:34:42 crc kubenswrapper[4813]: I0129 16:34:42.911949 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4h4mv"]
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.011557 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xsv7r"]
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.011772 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" podUID="284b2466-e05a-45dc-af3a-2f36a1409b95" containerName="controller-manager" containerID="cri-o://d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea" gracePeriod=30
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.019618 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5365bd03-836d-4231-9dde-3b2a3c201e2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.019693 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5365bd03-836d-4231-9dde-3b2a3c201e2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.019922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmfx8\" (UniqueName: \"kubernetes.io/projected/5365bd03-836d-4231-9dde-3b2a3c201e2d-kube-api-access-vmfx8\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.095498 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"]
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.095724 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" podUID="420c9b8a-a626-4eb7-885e-7290574cfc30" containerName="route-controller-manager" containerID="cri-o://a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5" gracePeriod=30
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.127716 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5365bd03-836d-4231-9dde-3b2a3c201e2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.127807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5365bd03-836d-4231-9dde-3b2a3c201e2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.127899 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmfx8\" (UniqueName: \"kubernetes.io/projected/5365bd03-836d-4231-9dde-3b2a3c201e2d-kube-api-access-vmfx8\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.129898 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5365bd03-836d-4231-9dde-3b2a3c201e2d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.137226 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5365bd03-836d-4231-9dde-3b2a3c201e2d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.148672 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmfx8\" (UniqueName: \"kubernetes.io/projected/5365bd03-836d-4231-9dde-3b2a3c201e2d-kube-api-access-vmfx8\") pod \"marketplace-operator-79b997595-4h4mv\" (UID: \"5365bd03-836d-4231-9dde-3b2a3c201e2d\") " pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.242809 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.534028 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.582203 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.624693 4813 generic.go:334] "Generic (PLEG): container finished" podID="284b2466-e05a-45dc-af3a-2f36a1409b95" containerID="d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea" exitCode=0
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.624828 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.624902 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" event={"ID":"284b2466-e05a-45dc-af3a-2f36a1409b95","Type":"ContainerDied","Data":"d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea"}
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.624949 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xsv7r" event={"ID":"284b2466-e05a-45dc-af3a-2f36a1409b95","Type":"ContainerDied","Data":"11ec15a1c052b2a11602cc4f044b59b9106d0b70c311a5d4a73f407fb13d99a0"}
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.624974 4813 scope.go:117] "RemoveContainer" containerID="d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.629418 4813 generic.go:334] "Generic (PLEG): container finished" podID="420c9b8a-a626-4eb7-885e-7290574cfc30" containerID="a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5" exitCode=0
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.629471 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" event={"ID":"420c9b8a-a626-4eb7-885e-7290574cfc30","Type":"ContainerDied","Data":"a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5"}
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.629502 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774" event={"ID":"420c9b8a-a626-4eb7-885e-7290574cfc30","Type":"ContainerDied","Data":"5bb99359019f9f7642118007b5341bc12813c7117bbb6cc0629014d06a3932e0"}
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.629560 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.641487 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-config\") pod \"284b2466-e05a-45dc-af3a-2f36a1409b95\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") "
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.641608 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-proxy-ca-bundles\") pod \"284b2466-e05a-45dc-af3a-2f36a1409b95\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") "
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.641661 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb5xg\" (UniqueName: \"kubernetes.io/projected/284b2466-e05a-45dc-af3a-2f36a1409b95-kube-api-access-sb5xg\") pod \"284b2466-e05a-45dc-af3a-2f36a1409b95\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") "
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.641709 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b2466-e05a-45dc-af3a-2f36a1409b95-serving-cert\") pod \"284b2466-e05a-45dc-af3a-2f36a1409b95\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") "
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.641733 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-client-ca\") pod \"284b2466-e05a-45dc-af3a-2f36a1409b95\" (UID: \"284b2466-e05a-45dc-af3a-2f36a1409b95\") "
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.643038 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-client-ca" (OuterVolumeSpecName: "client-ca") pod "284b2466-e05a-45dc-af3a-2f36a1409b95" (UID: "284b2466-e05a-45dc-af3a-2f36a1409b95"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.643659 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-config" (OuterVolumeSpecName: "config") pod "284b2466-e05a-45dc-af3a-2f36a1409b95" (UID: "284b2466-e05a-45dc-af3a-2f36a1409b95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.644525 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "284b2466-e05a-45dc-af3a-2f36a1409b95" (UID: "284b2466-e05a-45dc-af3a-2f36a1409b95"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.654676 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284b2466-e05a-45dc-af3a-2f36a1409b95-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "284b2466-e05a-45dc-af3a-2f36a1409b95" (UID: "284b2466-e05a-45dc-af3a-2f36a1409b95"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.654860 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284b2466-e05a-45dc-af3a-2f36a1409b95-kube-api-access-sb5xg" (OuterVolumeSpecName: "kube-api-access-sb5xg") pod "284b2466-e05a-45dc-af3a-2f36a1409b95" (UID: "284b2466-e05a-45dc-af3a-2f36a1409b95"). InnerVolumeSpecName "kube-api-access-sb5xg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.662677 4813 scope.go:117] "RemoveContainer" containerID="d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea"
Jan 29 16:34:43 crc kubenswrapper[4813]: E0129 16:34:43.663221 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea\": container with ID starting with d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea not found: ID does not exist" containerID="d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.663336 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea"} err="failed to get container status \"d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea\": rpc error: code = NotFound desc = could not find container \"d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea\": container with ID starting with d4e3862d29e2e6a8776154a3d74bc450f24c96be4172edbb319b5afe3c6c0fea not found: ID does not exist"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.663364 4813 scope.go:117] "RemoveContainer" containerID="a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.671702 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4h4mv"]
Jan 29 16:34:43 crc kubenswrapper[4813]: W0129 16:34:43.678715 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5365bd03_836d_4231_9dde_3b2a3c201e2d.slice/crio-e44655015b3e798f0415338278b836c48b8dd0ef128e958976ad34d44e0d459c WatchSource:0}: Error finding container e44655015b3e798f0415338278b836c48b8dd0ef128e958976ad34d44e0d459c: Status 404 returned error can't find the container with id e44655015b3e798f0415338278b836c48b8dd0ef128e958976ad34d44e0d459c
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.681185 4813 scope.go:117] "RemoveContainer" containerID="a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5"
Jan 29 16:34:43 crc kubenswrapper[4813]: E0129 16:34:43.681887 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5\": container with ID starting with a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5 not found: ID does not exist" containerID="a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5"
Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.681949 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5"} err="failed to get container status \"a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5\": rpc error: code = NotFound desc = could not find container \"a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5\": container with ID starting with a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5 not found: ID does not exist"
container status \"a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5\": rpc error: code = NotFound desc = could not find container \"a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5\": container with ID starting with a96b668fbe7c4a6b24f1ba7b5b9a6e119743ff9d8782c2883a9b771e83ace8d5 not found: ID does not exist" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.742530 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nddpw\" (UniqueName: \"kubernetes.io/projected/420c9b8a-a626-4eb7-885e-7290574cfc30-kube-api-access-nddpw\") pod \"420c9b8a-a626-4eb7-885e-7290574cfc30\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.742641 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-config\") pod \"420c9b8a-a626-4eb7-885e-7290574cfc30\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.742675 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-client-ca\") pod \"420c9b8a-a626-4eb7-885e-7290574cfc30\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.742750 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/420c9b8a-a626-4eb7-885e-7290574cfc30-serving-cert\") pod \"420c9b8a-a626-4eb7-885e-7290574cfc30\" (UID: \"420c9b8a-a626-4eb7-885e-7290574cfc30\") " Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.743098 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.743138 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.743155 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb5xg\" (UniqueName: \"kubernetes.io/projected/284b2466-e05a-45dc-af3a-2f36a1409b95-kube-api-access-sb5xg\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.743191 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/284b2466-e05a-45dc-af3a-2f36a1409b95-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.743204 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/284b2466-e05a-45dc-af3a-2f36a1409b95-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.743705 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-client-ca" (OuterVolumeSpecName: "client-ca") pod "420c9b8a-a626-4eb7-885e-7290574cfc30" (UID: "420c9b8a-a626-4eb7-885e-7290574cfc30"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.743733 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-config" (OuterVolumeSpecName: "config") pod "420c9b8a-a626-4eb7-885e-7290574cfc30" (UID: "420c9b8a-a626-4eb7-885e-7290574cfc30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.747338 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420c9b8a-a626-4eb7-885e-7290574cfc30-kube-api-access-nddpw" (OuterVolumeSpecName: "kube-api-access-nddpw") pod "420c9b8a-a626-4eb7-885e-7290574cfc30" (UID: "420c9b8a-a626-4eb7-885e-7290574cfc30"). InnerVolumeSpecName "kube-api-access-nddpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.747645 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420c9b8a-a626-4eb7-885e-7290574cfc30-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "420c9b8a-a626-4eb7-885e-7290574cfc30" (UID: "420c9b8a-a626-4eb7-885e-7290574cfc30"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.844798 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/420c9b8a-a626-4eb7-885e-7290574cfc30-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.844854 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nddpw\" (UniqueName: \"kubernetes.io/projected/420c9b8a-a626-4eb7-885e-7290574cfc30-kube-api-access-nddpw\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.844868 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.844878 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/420c9b8a-a626-4eb7-885e-7290574cfc30-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.963007 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xsv7r"] Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.967940 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xsv7r"] Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.975971 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"] Jan 29 16:34:43 crc kubenswrapper[4813]: I0129 16:34:43.983981 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dr774"] Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.191906 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsnht"] Jan 29 16:34:44 crc kubenswrapper[4813]: E0129 16:34:44.192145 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284b2466-e05a-45dc-af3a-2f36a1409b95" 
Jan 29 16:34:44 crc kubenswrapper[4813]: E0129 16:34:44.192145 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284b2466-e05a-45dc-af3a-2f36a1409b95" containerName="controller-manager"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.192158 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="284b2466-e05a-45dc-af3a-2f36a1409b95" containerName="controller-manager"
Jan 29 16:34:44 crc kubenswrapper[4813]: E0129 16:34:44.192169 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420c9b8a-a626-4eb7-885e-7290574cfc30" containerName="route-controller-manager"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.192174 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="420c9b8a-a626-4eb7-885e-7290574cfc30" containerName="route-controller-manager"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.192261 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="420c9b8a-a626-4eb7-885e-7290574cfc30" containerName="route-controller-manager"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.192275 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="284b2466-e05a-45dc-af3a-2f36a1409b95" containerName="controller-manager"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.192656 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.228057 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsnht"]
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.255734 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284b2466-e05a-45dc-af3a-2f36a1409b95" path="/var/lib/kubelet/pods/284b2466-e05a-45dc-af3a-2f36a1409b95/volumes"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.256321 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420c9b8a-a626-4eb7-885e-7290574cfc30" path="/var/lib/kubelet/pods/420c9b8a-a626-4eb7-885e-7290574cfc30/volumes"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.349982 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.350428 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3590d5c1-234e-4377-9c66-338bb367f757-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.350460 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlwr\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-kube-api-access-tjlwr\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.350487 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-registry-tls\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.350663 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3590d5c1-234e-4377-9c66-338bb367f757-registry-certificates\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.350691 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-bound-sa-token\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.350714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3590d5c1-234e-4377-9c66-338bb367f757-trusted-ca\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.350743 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3590d5c1-234e-4377-9c66-338bb367f757-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: E0129 16:34:44.362144 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 16:34:44 crc kubenswrapper[4813]: E0129 16:34:44.362297 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjmdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p9jg2_openshift-marketplace(b560cafb-e64c-45b0-912d-1d086bfb8d20): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:34:44 crc kubenswrapper[4813]: E0129 16:34:44.363393 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.386449 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.452481 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3590d5c1-234e-4377-9c66-338bb367f757-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.452565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3590d5c1-234e-4377-9c66-338bb367f757-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.452595 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjlwr\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-kube-api-access-tjlwr\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.452627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-registry-tls\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.452695 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3590d5c1-234e-4377-9c66-338bb367f757-registry-certificates\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.452751 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-bound-sa-token\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.453058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3590d5c1-234e-4377-9c66-338bb367f757-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.454153 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3590d5c1-234e-4377-9c66-338bb367f757-registry-certificates\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.454214 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3590d5c1-234e-4377-9c66-338bb367f757-trusted-ca\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.455140 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3590d5c1-234e-4377-9c66-338bb367f757-trusted-ca\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.458705 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3590d5c1-234e-4377-9c66-338bb367f757-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.463723 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-registry-tls\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.479747 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjlwr\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-kube-api-access-tjlwr\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.482599 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3590d5c1-234e-4377-9c66-338bb367f757-bound-sa-token\") pod \"image-registry-66df7c8f76-tsnht\" (UID: \"3590d5c1-234e-4377-9c66-338bb367f757\") " pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.507932 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.614466 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cd54bb676-l7tc6"]
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.615517 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.618265 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.619132 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"]
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.620181 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.621992 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.622577 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.625025 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.625128 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.625472 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.625658 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.625698 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.625721 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.625796 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.625945 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.626982 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.638537 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd54bb676-l7tc6"]
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.654299 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv" event={"ID":"5365bd03-836d-4231-9dde-3b2a3c201e2d","Type":"ContainerStarted","Data":"c8b8924809243c4005333d7b7e3e13102dfe9395c87e57a576fd08f040dda34c"}
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.654387 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv" event={"ID":"5365bd03-836d-4231-9dde-3b2a3c201e2d","Type":"ContainerStarted","Data":"e44655015b3e798f0415338278b836c48b8dd0ef128e958976ad34d44e0d459c"}
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.654776 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.655165 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.661280 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.664547 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"]
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.694222 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4h4mv" podStartSLOduration=2.694192665 podStartE2EDuration="2.694192665s" podCreationTimestamp="2026-01-29 16:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:34:44.680566367 +0000 UTC m=+337.167769593" watchObservedRunningTime="2026-01-29 16:34:44.694192665 +0000 UTC m=+337.181395901"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758276 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-proxy-ca-bundles\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758354 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdggc\" (UniqueName: \"kubernetes.io/projected/be8c71e9-470d-4155-9f54-0e0285561d04-kube-api-access-bdggc\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758396 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-client-ca\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758426 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-config\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-client-ca\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-config\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758507 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48lqd\" (UniqueName: \"kubernetes.io/projected/beefb936-522a-47a8-817e-c60f324f0937-kube-api-access-48lqd\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758530 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beefb936-522a-47a8-817e-c60f324f0937-serving-cert\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.758587 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be8c71e9-470d-4155-9f54-0e0285561d04-serving-cert\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862130 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beefb936-522a-47a8-817e-c60f324f0937-serving-cert\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862697 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be8c71e9-470d-4155-9f54-0e0285561d04-serving-cert\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-proxy-ca-bundles\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862784 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdggc\" (UniqueName: \"kubernetes.io/projected/be8c71e9-470d-4155-9f54-0e0285561d04-kube-api-access-bdggc\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-config\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862856 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-client-ca\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862874 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-client-ca\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862894 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-config\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.862933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48lqd\" (UniqueName: \"kubernetes.io/projected/beefb936-522a-47a8-817e-c60f324f0937-kube-api-access-48lqd\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.864822 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-client-ca\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.865065 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-proxy-ca-bundles\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.865227 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-config\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.865380 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-config\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.866801 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-client-ca\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.868797 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beefb936-522a-47a8-817e-c60f324f0937-serving-cert\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.873964 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be8c71e9-470d-4155-9f54-0e0285561d04-serving-cert\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.880170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48lqd\" (UniqueName: \"kubernetes.io/projected/beefb936-522a-47a8-817e-c60f324f0937-kube-api-access-48lqd\") pod \"route-controller-manager-7cc7864974-pwwqd\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") " pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.883105 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdggc\" (UniqueName: \"kubernetes.io/projected/be8c71e9-470d-4155-9f54-0e0285561d04-kube-api-access-bdggc\") pod \"controller-manager-cd54bb676-l7tc6\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") " pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.950299 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.963249 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tsnht"]
Jan 29 16:34:44 crc kubenswrapper[4813]: W0129 16:34:44.964206 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3590d5c1_234e_4377_9c66_338bb367f757.slice/crio-5a93809e4df6dbbc7a46dc6be31612e5c3dadd2e058b6e970bba3406e31c6093 WatchSource:0}: Error finding container 5a93809e4df6dbbc7a46dc6be31612e5c3dadd2e058b6e970bba3406e31c6093: Status 404 returned error can't find the container with id 5a93809e4df6dbbc7a46dc6be31612e5c3dadd2e058b6e970bba3406e31c6093
Jan 29 16:34:44 crc kubenswrapper[4813]: I0129 16:34:44.966517 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.382826 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"]
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.455592 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd54bb676-l7tc6"]
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.682021 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd" event={"ID":"beefb936-522a-47a8-817e-c60f324f0937","Type":"ContainerStarted","Data":"b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937"}
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.682091 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd" event={"ID":"beefb936-522a-47a8-817e-c60f324f0937","Type":"ContainerStarted","Data":"a6007b57fffc223877eee80948e401d73a8182fa5aa9dbe0ab2e813a2c5944e3"}
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.682328 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.686741 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6" event={"ID":"be8c71e9-470d-4155-9f54-0e0285561d04","Type":"ContainerStarted","Data":"5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b"}
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.686820 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6" event={"ID":"be8c71e9-470d-4155-9f54-0e0285561d04","Type":"ContainerStarted","Data":"0e6dce9afbdbae56e2a8c8b55f63eb64b91c938eaaa646abe5599062d3b1b384"}
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.686952 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.689686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tsnht" event={"ID":"3590d5c1-234e-4377-9c66-338bb367f757","Type":"ContainerStarted","Data":"7363e0dcd340a0e3f099d09dbab9cb2be253f5efc59d4450ddce72ce78f01bce"}
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.689730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tsnht" event={"ID":"3590d5c1-234e-4377-9c66-338bb367f757","Type":"ContainerStarted","Data":"5a93809e4df6dbbc7a46dc6be31612e5c3dadd2e058b6e970bba3406e31c6093"}
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.689849 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tsnht"
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.690455 4813 patch_prober.go:28] interesting pod/controller-manager-cd54bb676-l7tc6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body=
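
[Editor's note] The failed readiness probe above is a plain HTTPS GET against the pod IP; "connection refused" just means the freshly started controller-manager was not listening yet (it flips to ready about a second later, per the records that follow). A minimal Go sketch of an equivalent check, assuming a cluster-internal self-signed serving cert (hence InsecureSkipVerify, for this illustration only):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // probes use short timeouts
		Transport: &http.Transport{
			// The serving cert is cluster-internal; skip verification
			// for this illustrative check only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.64:8443/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status) // 200 OK once the manager is up
}
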
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.690520 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6" podUID="be8c71e9-470d-4155-9f54-0e0285561d04" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused"
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.702242 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd" podStartSLOduration=2.702215989 podStartE2EDuration="2.702215989s" podCreationTimestamp="2026-01-29 16:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:34:45.700709453 +0000 UTC m=+338.187912669" watchObservedRunningTime="2026-01-29 16:34:45.702215989 +0000 UTC m=+338.189419215"
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.728835 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6" podStartSLOduration=2.728819275 podStartE2EDuration="2.728819275s" podCreationTimestamp="2026-01-29 16:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:34:45.722618675 +0000 UTC m=+338.209821891" watchObservedRunningTime="2026-01-29 16:34:45.728819275 +0000 UTC m=+338.216022491"
Jan 29 16:34:45 crc kubenswrapper[4813]: I0129 16:34:45.746299 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tsnht" podStartSLOduration=1.746282371 podStartE2EDuration="1.746282371s" podCreationTimestamp="2026-01-29 16:34:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:34:45.744063432 +0000 UTC m=+338.231266648" watchObservedRunningTime="2026-01-29 16:34:45.746282371 +0000 UTC m=+338.233485587"
Jan 29 16:34:46 crc kubenswrapper[4813]: I0129 16:34:46.108002 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:46 crc kubenswrapper[4813]: I0129 16:34:46.701668 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:52 crc kubenswrapper[4813]: I0129 16:34:52.848871 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cd54bb676-l7tc6"]
Jan 29 16:34:52 crc kubenswrapper[4813]: I0129 16:34:52.849766 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6" podUID="be8c71e9-470d-4155-9f54-0e0285561d04" containerName="controller-manager" containerID="cri-o://5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b" gracePeriod=30
Jan 29 16:34:52 crc kubenswrapper[4813]: I0129 16:34:52.875358 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"]
Jan 29 16:34:52 crc kubenswrapper[4813]: I0129 16:34:52.875684 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd" podUID="beefb936-522a-47a8-817e-c60f324f0937" containerName="route-controller-manager" containerID="cri-o://b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937" gracePeriod=30
Jan 29 16:34:53 crc kubenswrapper[4813]: E0129 16:34:53.242185 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7hpkl" podUID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.432307 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.444285 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600487 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-config\") pod \"be8c71e9-470d-4155-9f54-0e0285561d04\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600560 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be8c71e9-470d-4155-9f54-0e0285561d04-serving-cert\") pod \"be8c71e9-470d-4155-9f54-0e0285561d04\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600588 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-config\") pod \"beefb936-522a-47a8-817e-c60f324f0937\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-client-ca\") pod \"beefb936-522a-47a8-817e-c60f324f0937\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600735 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48lqd\" (UniqueName: \"kubernetes.io/projected/beefb936-522a-47a8-817e-c60f324f0937-kube-api-access-48lqd\") pod \"beefb936-522a-47a8-817e-c60f324f0937\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beefb936-522a-47a8-817e-c60f324f0937-serving-cert\") pod \"beefb936-522a-47a8-817e-c60f324f0937\" (UID: \"beefb936-522a-47a8-817e-c60f324f0937\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-proxy-ca-bundles\") pod \"be8c71e9-470d-4155-9f54-0e0285561d04\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600840 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdggc\" (UniqueName: \"kubernetes.io/projected/be8c71e9-470d-4155-9f54-0e0285561d04-kube-api-access-bdggc\") pod \"be8c71e9-470d-4155-9f54-0e0285561d04\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.600889 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-client-ca\") pod \"be8c71e9-470d-4155-9f54-0e0285561d04\" (UID: \"be8c71e9-470d-4155-9f54-0e0285561d04\") "
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.601968 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-client-ca" (OuterVolumeSpecName: "client-ca") pod "be8c71e9-470d-4155-9f54-0e0285561d04" (UID: "be8c71e9-470d-4155-9f54-0e0285561d04"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.602009 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-config" (OuterVolumeSpecName: "config") pod "be8c71e9-470d-4155-9f54-0e0285561d04" (UID: "be8c71e9-470d-4155-9f54-0e0285561d04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.602490 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "be8c71e9-470d-4155-9f54-0e0285561d04" (UID: "be8c71e9-470d-4155-9f54-0e0285561d04"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.602717 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-client-ca" (OuterVolumeSpecName: "client-ca") pod "beefb936-522a-47a8-817e-c60f324f0937" (UID: "beefb936-522a-47a8-817e-c60f324f0937"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.602788 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-config" (OuterVolumeSpecName: "config") pod "beefb936-522a-47a8-817e-c60f324f0937" (UID: "beefb936-522a-47a8-817e-c60f324f0937"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.608406 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be8c71e9-470d-4155-9f54-0e0285561d04-kube-api-access-bdggc" (OuterVolumeSpecName: "kube-api-access-bdggc") pod "be8c71e9-470d-4155-9f54-0e0285561d04" (UID: "be8c71e9-470d-4155-9f54-0e0285561d04"). InnerVolumeSpecName "kube-api-access-bdggc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.608404 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beefb936-522a-47a8-817e-c60f324f0937-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "beefb936-522a-47a8-817e-c60f324f0937" (UID: "beefb936-522a-47a8-817e-c60f324f0937"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.608475 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beefb936-522a-47a8-817e-c60f324f0937-kube-api-access-48lqd" (OuterVolumeSpecName: "kube-api-access-48lqd") pod "beefb936-522a-47a8-817e-c60f324f0937" (UID: "beefb936-522a-47a8-817e-c60f324f0937"). InnerVolumeSpecName "kube-api-access-48lqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.608518 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be8c71e9-470d-4155-9f54-0e0285561d04-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "be8c71e9-470d-4155-9f54-0e0285561d04" (UID: "be8c71e9-470d-4155-9f54-0e0285561d04"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702478 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdggc\" (UniqueName: \"kubernetes.io/projected/be8c71e9-470d-4155-9f54-0e0285561d04-kube-api-access-bdggc\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702528 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702545 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702555 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be8c71e9-470d-4155-9f54-0e0285561d04-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702569 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702577 4813 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/beefb936-522a-47a8-817e-c60f324f0937-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702586 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48lqd\" (UniqueName: \"kubernetes.io/projected/beefb936-522a-47a8-817e-c60f324f0937-kube-api-access-48lqd\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702594 4813 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/beefb936-522a-47a8-817e-c60f324f0937-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.702603 4813 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be8c71e9-470d-4155-9f54-0e0285561d04-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.740483 4813 generic.go:334] "Generic (PLEG): container finished" podID="be8c71e9-470d-4155-9f54-0e0285561d04" containerID="5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b" exitCode=0
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.740564 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6" event={"ID":"be8c71e9-470d-4155-9f54-0e0285561d04","Type":"ContainerDied","Data":"5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b"}
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.740608 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.740652 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd54bb676-l7tc6" event={"ID":"be8c71e9-470d-4155-9f54-0e0285561d04","Type":"ContainerDied","Data":"0e6dce9afbdbae56e2a8c8b55f63eb64b91c938eaaa646abe5599062d3b1b384"}
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.740714 4813 scope.go:117] "RemoveContainer" containerID="5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.743572 4813 generic.go:334] "Generic (PLEG): container finished" podID="beefb936-522a-47a8-817e-c60f324f0937" containerID="b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937" exitCode=0
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.743628 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd" event={"ID":"beefb936-522a-47a8-817e-c60f324f0937","Type":"ContainerDied","Data":"b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937"}
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.743656 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd" event={"ID":"beefb936-522a-47a8-817e-c60f324f0937","Type":"ContainerDied","Data":"a6007b57fffc223877eee80948e401d73a8182fa5aa9dbe0ab2e813a2c5944e3"}
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.743723 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.770645 4813 scope.go:117] "RemoveContainer" containerID="5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b"
Jan 29 16:34:53 crc kubenswrapper[4813]: E0129 16:34:53.771918 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b\": container with ID starting with 5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b not found: ID does not exist" containerID="5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.773080 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b"} err="failed to get container status \"5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b\": rpc error: code = NotFound desc = could not find container \"5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b\": container with ID starting with 5d8466e717ebacee5e51debf1fa9fddc98fb9e18b836462d7598b36be10eef4b not found: ID does not exist"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.773163 4813 scope.go:117] "RemoveContainer" containerID="b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.783686 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cd54bb676-l7tc6"]
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.788382 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cd54bb676-l7tc6"]
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.799055 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"]
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.799444 4813 scope.go:117] "RemoveContainer" containerID="b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937"
Jan 29 16:34:53 crc kubenswrapper[4813]: E0129 16:34:53.800140 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937\": container with ID starting with b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937 not found: ID does not exist" containerID="b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.800236 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937"} err="failed to get container status \"b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937\": rpc error: code = NotFound desc = could not find container \"b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937\": container with ID starting with b126ed621272a36798213f3306a81f3ebfc4803eb399fc8ca0716ed6b25b8937 not found: ID does not exist"
Jan 29 16:34:53 crc kubenswrapper[4813]: I0129 16:34:53.804740 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cc7864974-pwwqd"]
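
[Editor's note] The ContainerStatus/DeleteContainer NotFound errors above are a benign race: the deletion worker asks CRI-O about a container that was already removed. The usual pattern for keeping such deletes idempotent is to treat gRPC NotFound as success; a sketch of that pattern, assuming the error arrives as a gRPC status as it does over the CRI (removeContainer and the stub closures are hypothetical names for illustration):

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer wraps a CRI-style delete so that "already gone"
// counts as success, tolerating the race seen in the log above.
func removeContainer(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		return nil // container was removed underneath us; nothing left to do
	}
	return err
}

func main() {
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeContainer(gone, "5d8466e7...")) // <nil>: treated as success

	broken := func(id string) error { return errors.New("runtime unavailable") }
	fmt.Println(removeContainer(broken, "5d8466e7...")) // real errors still propagate
}
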
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.246907 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be8c71e9-470d-4155-9f54-0e0285561d04" path="/var/lib/kubelet/pods/be8c71e9-470d-4155-9f54-0e0285561d04/volumes"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.247748 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beefb936-522a-47a8-817e-c60f324f0937" path="/var/lib/kubelet/pods/beefb936-522a-47a8-817e-c60f324f0937/volumes"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.637201 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"]
Jan 29 16:34:54 crc kubenswrapper[4813]: E0129 16:34:54.637548 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beefb936-522a-47a8-817e-c60f324f0937" containerName="route-controller-manager"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.637564 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="beefb936-522a-47a8-817e-c60f324f0937" containerName="route-controller-manager"
Jan 29 16:34:54 crc kubenswrapper[4813]: E0129 16:34:54.637584 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be8c71e9-470d-4155-9f54-0e0285561d04" containerName="controller-manager"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.637593 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="be8c71e9-470d-4155-9f54-0e0285561d04" containerName="controller-manager"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.637714 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="beefb936-522a-47a8-817e-c60f324f0937" containerName="route-controller-manager"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.637734 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="be8c71e9-470d-4155-9f54-0e0285561d04" containerName="controller-manager"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.638283 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.641218 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.641314 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.641480 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.642162 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.642601 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c4cf48478-g7lgh"]
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.643023 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.643155 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.644028 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.645441 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.645664 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.646586 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.646607 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.646833 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.647813 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.649347 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"]
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.681300 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.670489 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4cf48478-g7lgh"]
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.716446 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1602e8eb-7749-4075-9c10-5999d9e3bf41-serving-cert\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.716506 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1602e8eb-7749-4075-9c10-5999d9e3bf41-client-ca\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.716552 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cw5c\" (UniqueName: \"kubernetes.io/projected/1602e8eb-7749-4075-9c10-5999d9e3bf41-kube-api-access-8cw5c\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.716583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1602e8eb-7749-4075-9c10-5999d9e3bf41-config\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818508 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7m8\" (UniqueName: \"kubernetes.io/projected/743e319d-0d51-44db-b6a7-c2843a015be3-kube-api-access-cf7m8\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1602e8eb-7749-4075-9c10-5999d9e3bf41-serving-cert\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1602e8eb-7749-4075-9c10-5999d9e3bf41-client-ca\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818692 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-proxy-ca-bundles\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh"
Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\"
(UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-config\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818776 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/743e319d-0d51-44db-b6a7-c2843a015be3-serving-cert\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818803 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cw5c\" (UniqueName: \"kubernetes.io/projected/1602e8eb-7749-4075-9c10-5999d9e3bf41-kube-api-access-8cw5c\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818828 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-client-ca\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.818854 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1602e8eb-7749-4075-9c10-5999d9e3bf41-config\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.819587 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1602e8eb-7749-4075-9c10-5999d9e3bf41-client-ca\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.820301 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1602e8eb-7749-4075-9c10-5999d9e3bf41-config\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.828041 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1602e8eb-7749-4075-9c10-5999d9e3bf41-serving-cert\") pod \"route-controller-manager-f7f49bffd-g65pk\" (UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.841317 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cw5c\" (UniqueName: \"kubernetes.io/projected/1602e8eb-7749-4075-9c10-5999d9e3bf41-kube-api-access-8cw5c\") pod \"route-controller-manager-f7f49bffd-g65pk\" 
(UID: \"1602e8eb-7749-4075-9c10-5999d9e3bf41\") " pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.921276 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-proxy-ca-bundles\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.921367 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-config\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.921428 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/743e319d-0d51-44db-b6a7-c2843a015be3-serving-cert\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.921461 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-client-ca\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.921531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf7m8\" (UniqueName: \"kubernetes.io/projected/743e319d-0d51-44db-b6a7-c2843a015be3-kube-api-access-cf7m8\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.923076 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-client-ca\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.923512 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-proxy-ca-bundles\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.924786 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743e319d-0d51-44db-b6a7-c2843a015be3-config\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.926815 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/743e319d-0d51-44db-b6a7-c2843a015be3-serving-cert\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.941035 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf7m8\" (UniqueName: \"kubernetes.io/projected/743e319d-0d51-44db-b6a7-c2843a015be3-kube-api-access-cf7m8\") pod \"controller-manager-c4cf48478-g7lgh\" (UID: \"743e319d-0d51-44db-b6a7-c2843a015be3\") " pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:54 crc kubenswrapper[4813]: I0129 16:34:54.972654 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.003601 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.176798 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk"] Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.254735 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c4cf48478-g7lgh"] Jan 29 16:34:55 crc kubenswrapper[4813]: W0129 16:34:55.280652 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod743e319d_0d51_44db_b6a7_c2843a015be3.slice/crio-3c6e9c938e6a4d0ee68936d7e25496bfb8d8d09bdcfc1ce5fe8055494d18168b WatchSource:0}: Error finding container 3c6e9c938e6a4d0ee68936d7e25496bfb8d8d09bdcfc1ce5fe8055494d18168b: Status 404 returned error can't find the container with id 3c6e9c938e6a4d0ee68936d7e25496bfb8d8d09bdcfc1ce5fe8055494d18168b Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.759205 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" event={"ID":"1602e8eb-7749-4075-9c10-5999d9e3bf41","Type":"ContainerStarted","Data":"ad23d8a3228acb15d184d155775bf1253c1ed4888501987427aa69e38c397c71"} Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.759246 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" event={"ID":"1602e8eb-7749-4075-9c10-5999d9e3bf41","Type":"ContainerStarted","Data":"c99464b096ff60feaa2b2563ad23c04334b73a2c9c737841bbe5d3ee08590876"} Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.761220 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.763917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" event={"ID":"743e319d-0d51-44db-b6a7-c2843a015be3","Type":"ContainerStarted","Data":"04d6f394316a9767bf0b9dc37c48a6cd28b908fe22772ac06f0610f3b2354e4d"} Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.763941 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" 
event={"ID":"743e319d-0d51-44db-b6a7-c2843a015be3","Type":"ContainerStarted","Data":"3c6e9c938e6a4d0ee68936d7e25496bfb8d8d09bdcfc1ce5fe8055494d18168b"} Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.765136 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.770033 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" Jan 29 16:34:55 crc kubenswrapper[4813]: I0129 16:34:55.775618 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" podStartSLOduration=3.775607574 podStartE2EDuration="3.775607574s" podCreationTimestamp="2026-01-29 16:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:34:55.774387817 +0000 UTC m=+348.261591033" watchObservedRunningTime="2026-01-29 16:34:55.775607574 +0000 UTC m=+348.262810790" Jan 29 16:34:56 crc kubenswrapper[4813]: I0129 16:34:56.040559 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f7f49bffd-g65pk" Jan 29 16:34:56 crc kubenswrapper[4813]: I0129 16:34:56.060859 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c4cf48478-g7lgh" podStartSLOduration=4.060840139 podStartE2EDuration="4.060840139s" podCreationTimestamp="2026-01-29 16:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:34:55.808025948 +0000 UTC m=+348.295229174" watchObservedRunningTime="2026-01-29 16:34:56.060840139 +0000 UTC m=+348.548043355" Jan 29 16:34:56 crc kubenswrapper[4813]: E0129 16:34:56.241005 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:34:57 crc kubenswrapper[4813]: E0129 16:34:57.247596 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:34:58 crc kubenswrapper[4813]: E0129 16:34:58.245472 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:35:00 crc kubenswrapper[4813]: I0129 16:35:00.240028 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:35:00 crc kubenswrapper[4813]: 
I0129 16:35:00.240484 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:35:04 crc kubenswrapper[4813]: I0129 16:35:04.515780 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tsnht" Jan 29 16:35:04 crc kubenswrapper[4813]: I0129 16:35:04.580283 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cmdkm"] Jan 29 16:35:08 crc kubenswrapper[4813]: E0129 16:35:08.414443 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:35:08 crc kubenswrapper[4813]: E0129 16:35:08.414793 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dfb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7hpkl_openshift-marketplace(3d9a67ec-1fb1-4442-99b8-c7ee1b729e23): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:35:08 crc kubenswrapper[4813]: E0129 16:35:08.415962 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-7hpkl" podUID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" Jan 29 16:35:09 crc 
kubenswrapper[4813]: E0129 16:35:09.362593 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:35:09 crc kubenswrapper[4813]: E0129 16:35:09.362940 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz8nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m6m45_openshift-marketplace(4e573ebb-94a9-440d-bba1-58251a12dfb9): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:35:09 crc kubenswrapper[4813]: E0129 16:35:09.364235 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:35:12 crc kubenswrapper[4813]: E0129 16:35:12.366682 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:35:12 crc kubenswrapper[4813]: E0129 16:35:12.367178 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bc62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c24v5_openshift-marketplace(3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:35:12 crc kubenswrapper[4813]: E0129 16:35:12.368135 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:35:12 crc kubenswrapper[4813]: E0129 16:35:12.368280 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjmdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p9jg2_openshift-marketplace(b560cafb-e64c-45b0-912d-1d086bfb8d20): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:35:12 crc kubenswrapper[4813]: E0129 16:35:12.368320 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:35:12 crc kubenswrapper[4813]: E0129 16:35:12.370238 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:35:19 crc kubenswrapper[4813]: E0129 16:35:19.244711 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7hpkl" podUID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" Jan 29 16:35:23 crc kubenswrapper[4813]: E0129 16:35:23.242062 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:35:24 crc kubenswrapper[4813]: E0129 16:35:24.242296 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:35:24 crc kubenswrapper[4813]: E0129 16:35:24.242346 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:35:29 crc kubenswrapper[4813]: I0129 16:35:29.622658 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" podUID="90fcf277-fd30-4c95-80b6-4c5199172c6d" containerName="registry" containerID="cri-o://8caeed8ba7b049001659a2f83973706ff5c460c6dfd4bca4675c88edd36afc97" gracePeriod=30 Jan 29 16:35:29 crc kubenswrapper[4813]: I0129 16:35:29.946369 4813 generic.go:334] "Generic (PLEG): container finished" podID="90fcf277-fd30-4c95-80b6-4c5199172c6d" containerID="8caeed8ba7b049001659a2f83973706ff5c460c6dfd4bca4675c88edd36afc97" exitCode=0 Jan 29 16:35:29 crc kubenswrapper[4813]: I0129 16:35:29.946454 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" event={"ID":"90fcf277-fd30-4c95-80b6-4c5199172c6d","Type":"ContainerDied","Data":"8caeed8ba7b049001659a2f83973706ff5c460c6dfd4bca4675c88edd36afc97"} Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.059127 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.235042 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"90fcf277-fd30-4c95-80b6-4c5199172c6d\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.235132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-certificates\") pod \"90fcf277-fd30-4c95-80b6-4c5199172c6d\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.235177 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90fcf277-fd30-4c95-80b6-4c5199172c6d-installation-pull-secrets\") pod \"90fcf277-fd30-4c95-80b6-4c5199172c6d\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.235240 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-bound-sa-token\") pod \"90fcf277-fd30-4c95-80b6-4c5199172c6d\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.235302 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w62ln\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-kube-api-access-w62ln\") pod \"90fcf277-fd30-4c95-80b6-4c5199172c6d\" (UID: 
\"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.235341 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-tls\") pod \"90fcf277-fd30-4c95-80b6-4c5199172c6d\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.235411 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90fcf277-fd30-4c95-80b6-4c5199172c6d-ca-trust-extracted\") pod \"90fcf277-fd30-4c95-80b6-4c5199172c6d\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.235477 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-trusted-ca\") pod \"90fcf277-fd30-4c95-80b6-4c5199172c6d\" (UID: \"90fcf277-fd30-4c95-80b6-4c5199172c6d\") " Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.236294 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "90fcf277-fd30-4c95-80b6-4c5199172c6d" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.236631 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "90fcf277-fd30-4c95-80b6-4c5199172c6d" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.241754 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.241804 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.242593 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-kube-api-access-w62ln" (OuterVolumeSpecName: "kube-api-access-w62ln") pod "90fcf277-fd30-4c95-80b6-4c5199172c6d" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d"). InnerVolumeSpecName "kube-api-access-w62ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.246127 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fcf277-fd30-4c95-80b6-4c5199172c6d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "90fcf277-fd30-4c95-80b6-4c5199172c6d" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:35:30 crc kubenswrapper[4813]: E0129 16:35:30.246572 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7hpkl" podUID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.247966 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "90fcf277-fd30-4c95-80b6-4c5199172c6d" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.248567 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "90fcf277-fd30-4c95-80b6-4c5199172c6d" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.252806 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "90fcf277-fd30-4c95-80b6-4c5199172c6d" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.271243 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fcf277-fd30-4c95-80b6-4c5199172c6d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "90fcf277-fd30-4c95-80b6-4c5199172c6d" (UID: "90fcf277-fd30-4c95-80b6-4c5199172c6d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.279405 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.280826 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba95b516b2c9c5d21eb0ef2cc39ecb780699f7b9dbcd2fa1b5115bbf3254bee0"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.280930 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://ba95b516b2c9c5d21eb0ef2cc39ecb780699f7b9dbcd2fa1b5115bbf3254bee0" gracePeriod=600 Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.337013 4813 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.337078 4813 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/90fcf277-fd30-4c95-80b6-4c5199172c6d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.337102 4813 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.337158 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w62ln\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-kube-api-access-w62ln\") on node \"crc\" DevicePath \"\"" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.337182 4813 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/90fcf277-fd30-4c95-80b6-4c5199172c6d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.337206 4813 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/90fcf277-fd30-4c95-80b6-4c5199172c6d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.337227 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90fcf277-fd30-4c95-80b6-4c5199172c6d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.954055 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="ba95b516b2c9c5d21eb0ef2cc39ecb780699f7b9dbcd2fa1b5115bbf3254bee0" exitCode=0 Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.954691 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" 
event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"ba95b516b2c9c5d21eb0ef2cc39ecb780699f7b9dbcd2fa1b5115bbf3254bee0"} Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.954722 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"f0fa0ba48c5aa1f9abd6566dd548d637e9193bba4c1da553e75757301ad99b41"} Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.954740 4813 scope.go:117] "RemoveContainer" containerID="0b4438d7c2c64143f685cc155c72297b681e31a1c993685dedbd107bfbd1873b" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.957636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" event={"ID":"90fcf277-fd30-4c95-80b6-4c5199172c6d","Type":"ContainerDied","Data":"b5170c4907b295092e73e0a07e199411353d328db8761160dba5ee875933e6b8"} Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.957769 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cmdkm" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.978023 4813 scope.go:117] "RemoveContainer" containerID="8caeed8ba7b049001659a2f83973706ff5c460c6dfd4bca4675c88edd36afc97" Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.993160 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cmdkm"] Jan 29 16:35:30 crc kubenswrapper[4813]: I0129 16:35:30.998222 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cmdkm"] Jan 29 16:35:32 crc kubenswrapper[4813]: I0129 16:35:32.246080 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fcf277-fd30-4c95-80b6-4c5199172c6d" path="/var/lib/kubelet/pods/90fcf277-fd30-4c95-80b6-4c5199172c6d/volumes" Jan 29 16:35:35 crc kubenswrapper[4813]: E0129 16:35:35.242290 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:35:36 crc kubenswrapper[4813]: E0129 16:35:36.244494 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:35:39 crc kubenswrapper[4813]: E0129 16:35:39.242345 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:35:42 crc kubenswrapper[4813]: E0129 16:35:42.242398 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7hpkl" 
podUID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" Jan 29 16:35:47 crc kubenswrapper[4813]: E0129 16:35:47.242696 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:35:48 crc kubenswrapper[4813]: E0129 16:35:48.248501 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:35:50 crc kubenswrapper[4813]: E0129 16:35:50.242232 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:36:01 crc kubenswrapper[4813]: E0129 16:36:01.623476 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:36:01 crc kubenswrapper[4813]: E0129 16:36:01.624139 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjmdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p9jg2_openshift-marketplace(b560cafb-e64c-45b0-912d-1d086bfb8d20): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 
(Forbidden)" logger="UnhandledError" Jan 29 16:36:01 crc kubenswrapper[4813]: E0129 16:36:01.625268 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:36:05 crc kubenswrapper[4813]: I0129 16:36:05.143567 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hpkl" event={"ID":"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23","Type":"ContainerStarted","Data":"02345d19258b66e7d1cd3f69019c0ded3d6794e48a429c071bac4996e6cd0083"} Jan 29 16:36:05 crc kubenswrapper[4813]: E0129 16:36:05.355972 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:36:05 crc kubenswrapper[4813]: E0129 16:36:05.356362 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bc62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c24v5_openshift-marketplace(3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:36:05 crc kubenswrapper[4813]: E0129 16:36:05.357631 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:36:05 crc kubenswrapper[4813]: E0129 16:36:05.615500 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:36:05 crc kubenswrapper[4813]: E0129 16:36:05.615695 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz8nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m6m45_openshift-marketplace(4e573ebb-94a9-440d-bba1-58251a12dfb9): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:36:05 crc kubenswrapper[4813]: E0129 16:36:05.616942 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:36:06 crc kubenswrapper[4813]: I0129 16:36:06.151571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hpkl" event={"ID":"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23","Type":"ContainerDied","Data":"02345d19258b66e7d1cd3f69019c0ded3d6794e48a429c071bac4996e6cd0083"} Jan 29 16:36:06 crc kubenswrapper[4813]: I0129 16:36:06.151518 4813 generic.go:334] "Generic (PLEG): container finished" podID="3d9a67ec-1fb1-4442-99b8-c7ee1b729e23" containerID="02345d19258b66e7d1cd3f69019c0ded3d6794e48a429c071bac4996e6cd0083" exitCode=0 Jan 29 16:36:10 crc kubenswrapper[4813]: I0129 
16:36:10.178465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hpkl" event={"ID":"3d9a67ec-1fb1-4442-99b8-c7ee1b729e23","Type":"ContainerStarted","Data":"ecc8f28799d259ac81982740419811e93fba93bb6cac277fa59e14d9daa048f4"} Jan 29 16:36:10 crc kubenswrapper[4813]: I0129 16:36:10.198816 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7hpkl" podStartSLOduration=3.084395539 podStartE2EDuration="1m42.19879751s" podCreationTimestamp="2026-01-29 16:34:28 +0000 UTC" firstStartedPulling="2026-01-29 16:34:30.543616811 +0000 UTC m=+323.030820017" lastFinishedPulling="2026-01-29 16:36:09.658018762 +0000 UTC m=+422.145221988" observedRunningTime="2026-01-29 16:36:10.195306107 +0000 UTC m=+422.682509323" watchObservedRunningTime="2026-01-29 16:36:10.19879751 +0000 UTC m=+422.686000726" Jan 29 16:36:18 crc kubenswrapper[4813]: E0129 16:36:17.242330 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:36:18 crc kubenswrapper[4813]: E0129 16:36:17.242957 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:36:18 crc kubenswrapper[4813]: E0129 16:36:17.243768 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:36:19 crc kubenswrapper[4813]: I0129 16:36:19.176223 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7hpkl" Jan 29 16:36:19 crc kubenswrapper[4813]: I0129 16:36:19.176767 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7hpkl" Jan 29 16:36:19 crc kubenswrapper[4813]: I0129 16:36:19.327515 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7hpkl" Jan 29 16:36:19 crc kubenswrapper[4813]: I0129 16:36:19.371658 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7hpkl" Jan 29 16:36:32 crc kubenswrapper[4813]: E0129 16:36:32.245898 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:36:32 crc kubenswrapper[4813]: E0129 16:36:32.246040 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:36:32 crc kubenswrapper[4813]: E0129 16:36:32.246147 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:36:43 crc kubenswrapper[4813]: E0129 16:36:43.242944 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:36:44 crc kubenswrapper[4813]: E0129 16:36:44.242025 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:36:47 crc kubenswrapper[4813]: E0129 16:36:47.241246 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:36:55 crc kubenswrapper[4813]: E0129 16:36:55.241855 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:36:56 crc kubenswrapper[4813]: E0129 16:36:56.243240 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:36:58 crc kubenswrapper[4813]: E0129 16:36:58.245654 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:37:09 crc kubenswrapper[4813]: E0129 16:37:09.245387 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:37:11 crc kubenswrapper[4813]: E0129 16:37:11.241493 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:37:13 crc kubenswrapper[4813]: E0129 16:37:13.241882 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:37:21 crc kubenswrapper[4813]: E0129 16:37:21.241360 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:37:24 crc kubenswrapper[4813]: E0129 16:37:24.242174 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:37:26 crc kubenswrapper[4813]: I0129 16:37:26.240846 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:37:26 crc kubenswrapper[4813]: E0129 16:37:26.371524 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:37:26 crc kubenswrapper[4813]: E0129 16:37:26.371708 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjmdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-p9jg2_openshift-marketplace(b560cafb-e64c-45b0-912d-1d086bfb8d20): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:26 crc kubenswrapper[4813]: E0129 16:37:26.372932 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:37:30 crc kubenswrapper[4813]: I0129 16:37:30.239783 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:37:30 crc kubenswrapper[4813]: I0129 16:37:30.240175 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:37:33 crc kubenswrapper[4813]: E0129 16:37:33.417554 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:37:33 crc kubenswrapper[4813]: E0129 16:37:33.418178 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz8nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod certified-operators-m6m45_openshift-marketplace(4e573ebb-94a9-440d-bba1-58251a12dfb9): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:33 crc kubenswrapper[4813]: E0129 16:37:33.419430 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:37:37 crc kubenswrapper[4813]: E0129 16:37:37.363788 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:37:37 crc kubenswrapper[4813]: E0129 16:37:37.364429 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bc62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c24v5_openshift-marketplace(3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:37:37 crc kubenswrapper[4813]: E0129 16:37:37.366142 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-c24v5" 
podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:37:40 crc kubenswrapper[4813]: E0129 16:37:40.241104 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:37:47 crc kubenswrapper[4813]: E0129 16:37:47.244681 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:37:52 crc kubenswrapper[4813]: E0129 16:37:52.242245 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:37:53 crc kubenswrapper[4813]: E0129 16:37:53.241511 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:37:59 crc kubenswrapper[4813]: E0129 16:37:59.245359 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:38:00 crc kubenswrapper[4813]: I0129 16:38:00.240339 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:38:00 crc kubenswrapper[4813]: I0129 16:38:00.240679 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:38:04 crc kubenswrapper[4813]: E0129 16:38:04.242379 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:38:06 crc kubenswrapper[4813]: E0129 16:38:06.243981 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 
16:38:11 crc kubenswrapper[4813]: E0129 16:38:11.242471 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:38:19 crc kubenswrapper[4813]: E0129 16:38:19.242989 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:38:20 crc kubenswrapper[4813]: E0129 16:38:20.242666 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:38:24 crc kubenswrapper[4813]: E0129 16:38:24.241365 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:38:30 crc kubenswrapper[4813]: I0129 16:38:30.240609 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:38:30 crc kubenswrapper[4813]: I0129 16:38:30.241222 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:38:30 crc kubenswrapper[4813]: I0129 16:38:30.246831 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:38:30 crc kubenswrapper[4813]: I0129 16:38:30.247334 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0fa0ba48c5aa1f9abd6566dd548d637e9193bba4c1da553e75757301ad99b41"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:38:30 crc kubenswrapper[4813]: I0129 16:38:30.247396 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://f0fa0ba48c5aa1f9abd6566dd548d637e9193bba4c1da553e75757301ad99b41" gracePeriod=600 Jan 29 16:38:31 crc kubenswrapper[4813]: I0129 16:38:31.144157 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" 
containerID="f0fa0ba48c5aa1f9abd6566dd548d637e9193bba4c1da553e75757301ad99b41" exitCode=0 Jan 29 16:38:31 crc kubenswrapper[4813]: I0129 16:38:31.144343 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"f0fa0ba48c5aa1f9abd6566dd548d637e9193bba4c1da553e75757301ad99b41"} Jan 29 16:38:31 crc kubenswrapper[4813]: I0129 16:38:31.144719 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"e2ffde0f15a6d79920502c2dc8d9a89818cad0a3e312892131ea0a2ca924ac8e"} Jan 29 16:38:31 crc kubenswrapper[4813]: I0129 16:38:31.144757 4813 scope.go:117] "RemoveContainer" containerID="ba95b516b2c9c5d21eb0ef2cc39ecb780699f7b9dbcd2fa1b5115bbf3254bee0" Jan 29 16:38:32 crc kubenswrapper[4813]: E0129 16:38:32.241899 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:38:32 crc kubenswrapper[4813]: E0129 16:38:32.241933 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:38:39 crc kubenswrapper[4813]: E0129 16:38:39.245351 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:38:43 crc kubenswrapper[4813]: E0129 16:38:43.241908 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:38:45 crc kubenswrapper[4813]: E0129 16:38:45.242133 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:38:53 crc kubenswrapper[4813]: E0129 16:38:53.242853 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:38:56 crc kubenswrapper[4813]: E0129 16:38:56.240819 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:38:56 crc kubenswrapper[4813]: E0129 16:38:56.243076 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:39:04 crc kubenswrapper[4813]: E0129 16:39:04.242553 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:39:09 crc kubenswrapper[4813]: E0129 16:39:09.243007 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:39:09 crc kubenswrapper[4813]: E0129 16:39:09.243042 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:39:19 crc kubenswrapper[4813]: E0129 16:39:19.245302 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:39:21 crc kubenswrapper[4813]: E0129 16:39:21.241158 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:39:21 crc kubenswrapper[4813]: E0129 16:39:21.241453 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:39:32 crc kubenswrapper[4813]: E0129 16:39:32.243731 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:39:33 crc kubenswrapper[4813]: E0129 16:39:33.241652 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:39:36 crc kubenswrapper[4813]: E0129 16:39:36.242910 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:39:43 crc kubenswrapper[4813]: E0129 16:39:43.242458 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:39:45 crc kubenswrapper[4813]: E0129 16:39:45.240845 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:39:47 crc kubenswrapper[4813]: E0129 16:39:47.241007 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:39:57 crc kubenswrapper[4813]: E0129 16:39:57.242831 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:39:57 crc kubenswrapper[4813]: E0129 16:39:57.242885 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:40:02 crc kubenswrapper[4813]: E0129 16:40:02.242361 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p9jg2" podUID="b560cafb-e64c-45b0-912d-1d086bfb8d20" Jan 29 16:40:11 crc kubenswrapper[4813]: E0129 16:40:11.241982 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" Jan 29 16:40:11 crc kubenswrapper[4813]: E0129 16:40:11.242179 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m6m45" podUID="4e573ebb-94a9-440d-bba1-58251a12dfb9" Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.716490 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cj2h6"] Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.717690 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovn-controller" containerID="cri-o://f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d" gracePeriod=30 Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.719281 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="sbdb" containerID="cri-o://d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106" gracePeriod=30 Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.719331 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="nbdb" containerID="cri-o://b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd" gracePeriod=30 Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.719365 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="northd" containerID="cri-o://e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342" gracePeriod=30 Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.719401 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96" gracePeriod=30 Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.719430 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kube-rbac-proxy-node" containerID="cri-o://4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b" gracePeriod=30 Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.719460 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovn-acl-logging" containerID="cri-o://03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74" gracePeriod=30 Jan 29 16:40:18 crc kubenswrapper[4813]: I0129 16:40:18.755827 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" containerID="cri-o://4b1ad71bda3ef6253797677910c06ead6fe138502b74dd7e52b23bd3f946c8eb" gracePeriod=30 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.723318 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/2.log" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.724468 4813 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/1.log" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.724512 4813 generic.go:334] "Generic (PLEG): container finished" podID="4acefc9f-f68a-4566-a0f5-656b961d4267" containerID="8ee8d82a364f06cced80be93c75ae35206187559f390e9f2369d41c941699140" exitCode=2 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.724584 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7cjx7" event={"ID":"4acefc9f-f68a-4566-a0f5-656b961d4267","Type":"ContainerDied","Data":"8ee8d82a364f06cced80be93c75ae35206187559f390e9f2369d41c941699140"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.724617 4813 scope.go:117] "RemoveContainer" containerID="c14e01b3e546a35f1fdea720cf54f5295f06fe82ae568d24e7bff9872eec0753" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.724989 4813 scope.go:117] "RemoveContainer" containerID="8ee8d82a364f06cced80be93c75ae35206187559f390e9f2369d41c941699140" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.725241 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7cjx7_openshift-multus(4acefc9f-f68a-4566-a0f5-656b961d4267)\"" pod="openshift-multus/multus-7cjx7" podUID="4acefc9f-f68a-4566-a0f5-656b961d4267" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.726916 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/3.log" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.730886 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovn-acl-logging/0.log" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731272 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovn-controller/0.log" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731510 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="4b1ad71bda3ef6253797677910c06ead6fe138502b74dd7e52b23bd3f946c8eb" exitCode=0 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731531 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106" exitCode=0 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731538 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd" exitCode=0 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731545 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342" exitCode=0 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731551 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96" exitCode=0 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731558 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b" exitCode=0 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731565 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74" exitCode=143 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731572 4813 generic.go:334] "Generic (PLEG): container finished" podID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerID="f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d" exitCode=143 Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731590 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"4b1ad71bda3ef6253797677910c06ead6fe138502b74dd7e52b23bd3f946c8eb"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731621 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731631 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731639 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731647 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731656 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.731665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d"} Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.915250 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovnkube-controller/3.log" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.917766 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovn-acl-logging/0.log" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.918382 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovn-controller/0.log" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.918955 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.921912 4813 scope.go:117] "RemoveContainer" containerID="171b878e382b59dfeec68ec8266bb42df77af2928121594f1ae463be8460a609" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.922907 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-config\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.922989 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-kubelet\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923054 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-openvswitch\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923147 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923159 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt4dq\" (UniqueName: \"kubernetes.io/projected/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-kube-api-access-qt4dq\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923295 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923342 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923390 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-env-overrides\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923418 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923421 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-node-log\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923454 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-node-log" (OuterVolumeSpecName: "node-log") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923464 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-etc-openvswitch\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923494 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-ovn-kubernetes\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-systemd\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923502 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923541 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-ovn\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923559 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-var-lib-openvswitch\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923576 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-netns\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923603 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovn-node-metrics-cert\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923622 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-netd\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923649 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-log-socket\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923675 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-script-lib\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923701 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-slash\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923719 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-bin\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923738 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-systemd-units\") pod \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\" (UID: \"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5\") " Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923731 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.923798 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924023 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-log-socket" (OuterVolumeSpecName: "log-socket") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924081 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924083 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924169 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-slash" (OuterVolumeSpecName: "host-slash") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924289 4813 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924312 4813 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924324 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924335 4813 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924345 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924355 4813 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924366 4813 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924375 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924385 4813 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924395 4813 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924407 4813 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924417 4813 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924445 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-bin" 
(OuterVolumeSpecName: "host-cni-bin") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924469 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924492 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924516 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.924626 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.932442 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-kube-api-access-qt4dq" (OuterVolumeSpecName: "kube-api-access-qt4dq") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "kube-api-access-qt4dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.932491 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.951745 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" (UID: "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.990893 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5skqm"] Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991257 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991280 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991293 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991301 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991312 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="northd" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991320 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="northd" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991331 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="sbdb" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991338 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="sbdb" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991348 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991355 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991365 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fcf277-fd30-4c95-80b6-4c5199172c6d" containerName="registry" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991372 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fcf277-fd30-4c95-80b6-4c5199172c6d" containerName="registry" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991382 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovn-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991389 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovn-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991407 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kubecfg-setup" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991416 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kubecfg-setup" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991426 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" 
containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991433 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991443 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kube-rbac-proxy-node" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991451 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kube-rbac-proxy-node" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991464 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="nbdb" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991471 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="nbdb" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.991484 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovn-acl-logging" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991491 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovn-acl-logging" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991618 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991681 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991695 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovn-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991707 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovn-acl-logging" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991719 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kube-rbac-proxy-node" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991734 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="nbdb" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991743 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="northd" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991754 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991767 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991774 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991785 4813 
memory_manager.go:354] "RemoveStaleState removing state" podUID="90fcf277-fd30-4c95-80b6-4c5199172c6d" containerName="registry" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.991796 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="sbdb" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.992020 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.992034 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: E0129 16:40:19.992045 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.992053 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.992206 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" containerName="ovnkube-controller" Jan 29 16:40:19 crc kubenswrapper[4813]: I0129 16:40:19.996589 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.024793 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-cni-bin\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.024847 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-kubelet\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.024877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.024906 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-ovn\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.024932 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-slash\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.024955 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-log-socket\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.024975 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-ovnkube-config\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.024999 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnhm\" (UniqueName: \"kubernetes.io/projected/728e9539-ad2b-4576-9b62-7872baf1ec30-kube-api-access-2qnhm\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025026 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-systemd\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025057 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-run-ovn-kubernetes\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025081 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-openvswitch\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025104 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-etc-openvswitch\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025148 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-ovnkube-script-lib\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025288 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/728e9539-ad2b-4576-9b62-7872baf1ec30-ovn-node-metrics-cert\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025373 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-run-netns\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025468 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-var-lib-openvswitch\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.025490 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-node-log\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026102 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-env-overrides\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026128 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-cni-netd\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026154 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-systemd-units\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026353 4813 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026368 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026377 4813 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026410 4813 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-qt4dq\" (UniqueName: \"kubernetes.io/projected/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-kube-api-access-qt4dq\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026903 4813 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026921 4813 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026932 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.026943 4813 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127532 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-systemd-units\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127790 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-systemd-units\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127823 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-cni-bin\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127836 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-cni-bin\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127898 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-kubelet\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 
16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127951 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-ovn\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127984 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-slash\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128002 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-log-socket\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-ovnkube-config\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128025 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-ovn\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128046 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qnhm\" (UniqueName: \"kubernetes.io/projected/728e9539-ad2b-4576-9b62-7872baf1ec30-kube-api-access-2qnhm\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128052 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-slash\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.127996 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-kubelet\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128067 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128100 4813 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-log-socket\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128156 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-systemd\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-systemd\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128320 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-run-ovn-kubernetes\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128350 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-run-ovn-kubernetes\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128370 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-openvswitch\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128401 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-etc-openvswitch\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128466 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-run-openvswitch\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128473 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/728e9539-ad2b-4576-9b62-7872baf1ec30-ovn-node-metrics-cert\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128491 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-etc-openvswitch\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128503 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-ovnkube-script-lib\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128537 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-run-netns\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128576 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-var-lib-openvswitch\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128602 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-node-log\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128673 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-env-overrides\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128704 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-cni-netd\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128761 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-ovnkube-config\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-cni-netd\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-var-lib-openvswitch\") pod \"ovnkube-node-5skqm\" 
(UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.128865 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-node-log\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.129220 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/728e9539-ad2b-4576-9b62-7872baf1ec30-host-run-netns\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.129302 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-ovnkube-script-lib\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.129407 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/728e9539-ad2b-4576-9b62-7872baf1ec30-env-overrides\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.135436 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/728e9539-ad2b-4576-9b62-7872baf1ec30-ovn-node-metrics-cert\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.144649 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qnhm\" (UniqueName: \"kubernetes.io/projected/728e9539-ad2b-4576-9b62-7872baf1ec30-kube-api-access-2qnhm\") pod \"ovnkube-node-5skqm\" (UID: \"728e9539-ad2b-4576-9b62-7872baf1ec30\") " pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.311588 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:20 crc kubenswrapper[4813]: W0129 16:40:20.326975 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728e9539_ad2b_4576_9b62_7872baf1ec30.slice/crio-67d814438110f411bd194faa6c0100209149588c71a210bd054ae05bdbc3ef20 WatchSource:0}: Error finding container 67d814438110f411bd194faa6c0100209149588c71a210bd054ae05bdbc3ef20: Status 404 returned error can't find the container with id 67d814438110f411bd194faa6c0100209149588c71a210bd054ae05bdbc3ef20 Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.739896 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/2.log" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.742749 4813 generic.go:334] "Generic (PLEG): container finished" podID="b560cafb-e64c-45b0-912d-1d086bfb8d20" containerID="94ac673de974ff52045f10a6edf6d5802fa491764220cc442ee5884f4de20167" exitCode=0 Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.742845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9jg2" event={"ID":"b560cafb-e64c-45b0-912d-1d086bfb8d20","Type":"ContainerDied","Data":"94ac673de974ff52045f10a6edf6d5802fa491764220cc442ee5884f4de20167"} Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.756663 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovn-acl-logging/0.log" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.757372 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cj2h6_b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/ovn-controller/0.log" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.757897 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.757929 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cj2h6" event={"ID":"b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5","Type":"ContainerDied","Data":"e82f7b6f58e9cb0911dfe6a2065c085032b3655729c62abc9462f1100edced2a"} Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.757980 4813 scope.go:117] "RemoveContainer" containerID="4b1ad71bda3ef6253797677910c06ead6fe138502b74dd7e52b23bd3f946c8eb" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.762477 4813 generic.go:334] "Generic (PLEG): container finished" podID="728e9539-ad2b-4576-9b62-7872baf1ec30" containerID="56ec8899e840dd0439c7df1530bfa2416bf8ec5cc5e6e1321c05b0005250836e" exitCode=0 Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.762511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerDied","Data":"56ec8899e840dd0439c7df1530bfa2416bf8ec5cc5e6e1321c05b0005250836e"} Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.762534 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"67d814438110f411bd194faa6c0100209149588c71a210bd054ae05bdbc3ef20"} Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.778676 4813 scope.go:117] "RemoveContainer" containerID="d5222c0cfbf48b769debd4c20f06f24a65226acabaaadd88947bd0118d2ab106" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.798203 4813 scope.go:117] "RemoveContainer" containerID="b873c961948b3653098d179a429a0cf141e4a1cf42596e1b4042185747bd4cbd" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.824826 4813 scope.go:117] "RemoveContainer" containerID="e02382ae95ecc20f68eb76855a5245914add3ddac83ccd02927dfcbc94f3a342" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.842491 4813 scope.go:117] "RemoveContainer" containerID="ca87eb3f53feaab1ff29be395e2ddfe800cbb71bff6203eb0fcefc3be3592a96" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.871700 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cj2h6"] Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.878372 4813 scope.go:117] "RemoveContainer" containerID="4c5fda58ab09293a80250754ed8f0ba96f0b08143f2d369221683ec9858cb17b" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.878969 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cj2h6"] Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.906356 4813 scope.go:117] "RemoveContainer" containerID="03d3b96f389e3d7d7e98329e8212fdd420f1532208b46c17bcf87ebc09363f74" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.926769 4813 scope.go:117] "RemoveContainer" containerID="f09bfa83f80cda5ebc9f999a494335830650a6d779c8b33090f7a24cbf20d43d" Jan 29 16:40:20 crc kubenswrapper[4813]: I0129 16:40:20.942164 4813 scope.go:117] "RemoveContainer" containerID="537f26d78b0b5d3da90f4710f64043bb670d13783b55c2e444a443f540afd152" Jan 29 16:40:21 crc kubenswrapper[4813]: I0129 16:40:21.771081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"c42a559148d1f6f81451b4972e124afc13039251f6ff2f46fc8cbfb69128f5c0"} 
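
[Annotation] The long stretch above (16:40:19.92–16:40:20.14) is kubelet's volume manager reconciling desired state against actual state: volumes of the deleted pod b1ba0a14… go through "UnmountVolume started" → "UnmountVolume.TearDown succeeded" → "Volume detached", while the replacement pod 728e9539… gets "VerifyControllerAttachedVolume" → "MountVolume started" → "MountVolume.SetUp succeeded". The sketch below is illustrative only, a toy desired/actual reconciler under assumed types; kubelet's real volumemanager is far richer and these names are hypothetical.

```go
// Toy desired-state vs. actual-state volume reconciler (illustrative only).
package main

import "fmt"

type volume struct{ name, pod string }

func reconcile(desired, actual map[string]volume) {
	// Mounted but no longer desired: unmount, then mark detached --
	// mirroring "UnmountVolume started" -> "TearDown succeeded" -> "Volume detached".
	for key, v := range actual {
		if _, ok := desired[key]; ok {
			continue
		}
		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", v.name)
		delete(actual, key) // safe while ranging in Go
		fmt.Printf("Volume detached for volume %q\n", v.name)
	}
	// Desired but not yet mounted: mount ("MountVolume started" -> "SetUp succeeded").
	for key, v := range desired {
		if _, ok := actual[key]; ok {
			continue
		}
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
		actual[key] = v
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	oldPod := "b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5"
	newPod := "728e9539-ad2b-4576-9b62-7872baf1ec30"
	actual := map[string]volume{oldPod + "/ovnkube-config": {"ovnkube-config", oldPod}}
	desired := map[string]volume{newPod + "/ovnkube-config": {"ovnkube-config", newPod}}
	reconcile(desired, actual)
}
```

Because both pods want the same named host-path and configmap volumes, the old pod's copies are torn down while the new pod's copies (under the new UID) are set up in the same reconcile pass, which is exactly the interleaving seen in the log.
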
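[Annotation] The "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod … ContainerDied/ContainerStarted" lines come from the Pod Lifecycle Event Generator, which periodically relists containers and diffs the snapshot against the previous one. A minimal sketch of that diffing idea, under assumed types (not kubelet's actual PLEG API):

```go
// Toy PLEG-style relist diff (illustrative only).
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	PodID, ContainerID, Type string // Type: "ContainerStarted" or "ContainerDied"
}

// diff compares two relist snapshots and emits one event per transition.
func diff(old, cur map[string]state, podID string) []event {
	var events []event
	for id, s := range cur {
		prev, seen := old[id]
		switch {
		case !seen && s == running:
			events = append(events, event{podID, id, "ContainerStarted"})
		case seen && prev == running && s == exited:
			events = append(events, event{podID, id, "ContainerDied"})
		}
	}
	return events
}

func main() {
	old := map[string]state{"56ec8899e840": running}
	cur := map[string]state{"56ec8899e840": exited, "c42a559148d1": running}
	for _, e := range diff(old, cur, "728e9539-ad2b-4576-9b62-7872baf1ec30") {
		fmt.Printf("SyncLoop (PLEG): pod %s: %s %s\n", e.PodID, e.Type, e.ContainerID)
	}
}
```
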
Jan 29 16:40:21 crc kubenswrapper[4813]: I0129 16:40:21.771404 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"8ca64c3d033b80f6cac3165044d6d094d486572b75a0fec9f39ea58d255aa78d"} Jan 29 16:40:22 crc kubenswrapper[4813]: I0129 16:40:22.250400 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5" path="/var/lib/kubelet/pods/b1ba0a14-c0cd-40c8-ab31-e106a7d0b0e5/volumes" Jan 29 16:40:22 crc kubenswrapper[4813]: I0129 16:40:22.780288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"408cbe7a1093c98b15925a6a4413da73fad97197cd42e02e09b7ff29cca06b98"} Jan 29 16:40:22 crc kubenswrapper[4813]: I0129 16:40:22.780336 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"3a754b81bc06d56fd84a9b831f8c93c5ec48dc326392c84ad4d1676ea19b9315"} Jan 29 16:40:23 crc kubenswrapper[4813]: I0129 16:40:23.807747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"97027939c9038aa45683be0d22b3b453e1d37ceb36fd605d3d0a333b3f8d55b4"} Jan 29 16:40:23 crc kubenswrapper[4813]: I0129 16:40:23.814503 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9jg2" event={"ID":"b560cafb-e64c-45b0-912d-1d086bfb8d20","Type":"ContainerStarted","Data":"bc2b3483b6e57025d81e7982f9c2286be12d77c50e0c023a865890b11fb98add"} Jan 29 16:40:23 crc kubenswrapper[4813]: I0129 16:40:23.841200 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p9jg2" podStartSLOduration=1.971973287 podStartE2EDuration="5m55.841162494s" podCreationTimestamp="2026-01-29 16:34:28 +0000 UTC" firstStartedPulling="2026-01-29 16:34:29.525227525 +0000 UTC m=+322.012430731" lastFinishedPulling="2026-01-29 16:40:23.394416722 +0000 UTC m=+675.881619938" observedRunningTime="2026-01-29 16:40:23.836715687 +0000 UTC m=+676.323918903" watchObservedRunningTime="2026-01-29 16:40:23.841162494 +0000 UTC m=+676.328365710" Jan 29 16:40:24 crc kubenswrapper[4813]: I0129 16:40:24.821861 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"473c12e54d08ce98554833f0b076f11908ca9157bf58ed86c8daeaa2fdd428a7"} Jan 29 16:40:26 crc kubenswrapper[4813]: I0129 16:40:26.836517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"595742ac1cc483a3af6714d67ee1210c62cd716e742df850d37e061387cf003a"} Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.766727 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2jbt9"] Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.767543 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.770353 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.770459 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.770350 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.770631 4813 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-vs5nw" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.940293 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s76g6\" (UniqueName: \"kubernetes.io/projected/c3cbf7df-c4bd-422d-ac85-e28e505af034-kube-api-access-s76g6\") pod \"crc-storage-crc-2jbt9\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.940358 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c3cbf7df-c4bd-422d-ac85-e28e505af034-node-mnt\") pod \"crc-storage-crc-2jbt9\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.940411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c3cbf7df-c4bd-422d-ac85-e28e505af034-crc-storage\") pod \"crc-storage-crc-2jbt9\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.977008 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p9jg2" Jan 29 16:40:28 crc kubenswrapper[4813]: I0129 16:40:28.977072 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p9jg2" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.024444 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p9jg2" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.041624 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s76g6\" (UniqueName: \"kubernetes.io/projected/c3cbf7df-c4bd-422d-ac85-e28e505af034-kube-api-access-s76g6\") pod \"crc-storage-crc-2jbt9\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.041675 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c3cbf7df-c4bd-422d-ac85-e28e505af034-node-mnt\") pod \"crc-storage-crc-2jbt9\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.041699 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c3cbf7df-c4bd-422d-ac85-e28e505af034-crc-storage\") pod \"crc-storage-crc-2jbt9\" (UID: 
\"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.042418 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c3cbf7df-c4bd-422d-ac85-e28e505af034-node-mnt\") pod \"crc-storage-crc-2jbt9\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.042588 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c3cbf7df-c4bd-422d-ac85-e28e505af034-crc-storage\") pod \"crc-storage-crc-2jbt9\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.065016 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s76g6\" (UniqueName: \"kubernetes.io/projected/c3cbf7df-c4bd-422d-ac85-e28e505af034-kube-api-access-s76g6\") pod \"crc-storage-crc-2jbt9\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") " pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.089181 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:29 crc kubenswrapper[4813]: I0129 16:40:29.888956 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p9jg2" Jan 29 16:40:30 crc kubenswrapper[4813]: E0129 16:40:30.068609 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2jbt9_crc-storage_c3cbf7df-c4bd-422d-ac85-e28e505af034_0(1b70bdf631abd8908e9e18b2d1877cff8e2a03d05c2af10b866945f3990941d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 16:40:30 crc kubenswrapper[4813]: E0129 16:40:30.068802 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2jbt9_crc-storage_c3cbf7df-c4bd-422d-ac85-e28e505af034_0(1b70bdf631abd8908e9e18b2d1877cff8e2a03d05c2af10b866945f3990941d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:30 crc kubenswrapper[4813]: E0129 16:40:30.068822 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2jbt9_crc-storage_c3cbf7df-c4bd-422d-ac85-e28e505af034_0(1b70bdf631abd8908e9e18b2d1877cff8e2a03d05c2af10b866945f3990941d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:30 crc kubenswrapper[4813]: E0129 16:40:30.068871 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2jbt9_crc-storage(c3cbf7df-c4bd-422d-ac85-e28e505af034)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2jbt9_crc-storage(c3cbf7df-c4bd-422d-ac85-e28e505af034)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2jbt9_crc-storage_c3cbf7df-c4bd-422d-ac85-e28e505af034_0(1b70bdf631abd8908e9e18b2d1877cff8e2a03d05c2af10b866945f3990941d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2jbt9" podUID="c3cbf7df-c4bd-422d-ac85-e28e505af034" Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.240546 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.240808 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.858627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" event={"ID":"728e9539-ad2b-4576-9b62-7872baf1ec30","Type":"ContainerStarted","Data":"435b4164bec1203a142b16b19e3e13f9aa76b71febbec1601f2268ced0049e64"} Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.859080 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.859163 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.859182 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.860457 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6m45" event={"ID":"4e573ebb-94a9-440d-bba1-58251a12dfb9","Type":"ContainerStarted","Data":"7fae6581da4cd4e9d6d4ebb1c07a2681dd4119f2ffb77eeaba5e4af1cffc2735"} Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.893089 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" podStartSLOduration=11.893068222 podStartE2EDuration="11.893068222s" podCreationTimestamp="2026-01-29 16:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:40:30.88595465 +0000 UTC m=+683.373157876" watchObservedRunningTime="2026-01-29 16:40:30.893068222 +0000 UTC m=+683.380271438" Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.903384 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:40:30 crc kubenswrapper[4813]: 
I0129 16:40:30.904336 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm"
Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.974170 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2jbt9"]
Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.974286 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2jbt9"
Jan 29 16:40:30 crc kubenswrapper[4813]: I0129 16:40:30.974773 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2jbt9"
Jan 29 16:40:31 crc kubenswrapper[4813]: I0129 16:40:31.868728 4813 generic.go:334] "Generic (PLEG): container finished" podID="4e573ebb-94a9-440d-bba1-58251a12dfb9" containerID="7fae6581da4cd4e9d6d4ebb1c07a2681dd4119f2ffb77eeaba5e4af1cffc2735" exitCode=0
Jan 29 16:40:31 crc kubenswrapper[4813]: I0129 16:40:31.868825 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6m45" event={"ID":"4e573ebb-94a9-440d-bba1-58251a12dfb9","Type":"ContainerDied","Data":"7fae6581da4cd4e9d6d4ebb1c07a2681dd4119f2ffb77eeaba5e4af1cffc2735"}
Jan 29 16:40:32 crc kubenswrapper[4813]: I0129 16:40:32.240766 4813 scope.go:117] "RemoveContainer" containerID="8ee8d82a364f06cced80be93c75ae35206187559f390e9f2369d41c941699140"
Jan 29 16:40:32 crc kubenswrapper[4813]: E0129 16:40:32.240948 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-7cjx7_openshift-multus(4acefc9f-f68a-4566-a0f5-656b961d4267)\"" pod="openshift-multus/multus-7cjx7" podUID="4acefc9f-f68a-4566-a0f5-656b961d4267"
Jan 29 16:40:32 crc kubenswrapper[4813]: E0129 16:40:32.904273 4813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2jbt9_crc-storage_c3cbf7df-c4bd-422d-ac85-e28e505af034_0(773cccaee191885c01aa8ae3837e4278e7f53dbffb271e7b22fd1075b092522d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 16:40:32 crc kubenswrapper[4813]: E0129 16:40:32.904338 4813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2jbt9_crc-storage_c3cbf7df-c4bd-422d-ac85-e28e505af034_0(773cccaee191885c01aa8ae3837e4278e7f53dbffb271e7b22fd1075b092522d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2jbt9"
Jan 29 16:40:32 crc kubenswrapper[4813]: E0129 16:40:32.904359 4813 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2jbt9_crc-storage_c3cbf7df-c4bd-422d-ac85-e28e505af034_0(773cccaee191885c01aa8ae3837e4278e7f53dbffb271e7b22fd1075b092522d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:32 crc kubenswrapper[4813]: E0129 16:40:32.904399 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2jbt9_crc-storage(c3cbf7df-c4bd-422d-ac85-e28e505af034)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2jbt9_crc-storage(c3cbf7df-c4bd-422d-ac85-e28e505af034)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2jbt9_crc-storage_c3cbf7df-c4bd-422d-ac85-e28e505af034_0(773cccaee191885c01aa8ae3837e4278e7f53dbffb271e7b22fd1075b092522d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2jbt9" podUID="c3cbf7df-c4bd-422d-ac85-e28e505af034" Jan 29 16:40:33 crc kubenswrapper[4813]: I0129 16:40:33.879463 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6m45" event={"ID":"4e573ebb-94a9-440d-bba1-58251a12dfb9","Type":"ContainerStarted","Data":"8457808f420d3fc6c9cffdf6b5d95b724bd9788f9492c2c4595fcdf6dcdee167"} Jan 29 16:40:33 crc kubenswrapper[4813]: I0129 16:40:33.882175 4813 generic.go:334] "Generic (PLEG): container finished" podID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" containerID="ce4941299c35f60570ddcf4a229cddd1f82a41c261e2db57eb7dcb44dde96a25" exitCode=0 Jan 29 16:40:33 crc kubenswrapper[4813]: I0129 16:40:33.882284 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c24v5" event={"ID":"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1","Type":"ContainerDied","Data":"ce4941299c35f60570ddcf4a229cddd1f82a41c261e2db57eb7dcb44dde96a25"} Jan 29 16:40:33 crc kubenswrapper[4813]: I0129 16:40:33.901350 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m6m45" podStartSLOduration=2.065567474 podStartE2EDuration="6m7.901332542s" podCreationTimestamp="2026-01-29 16:34:26 +0000 UTC" firstStartedPulling="2026-01-29 16:34:27.510593575 +0000 UTC m=+319.997796791" lastFinishedPulling="2026-01-29 16:40:33.346358643 +0000 UTC m=+685.833561859" observedRunningTime="2026-01-29 16:40:33.900364325 +0000 UTC m=+686.387567751" watchObservedRunningTime="2026-01-29 16:40:33.901332542 +0000 UTC m=+686.388535758" Jan 29 16:40:35 crc kubenswrapper[4813]: I0129 16:40:35.895985 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c24v5" event={"ID":"3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1","Type":"ContainerStarted","Data":"9097ffb64d22fc42c778ccba8e4234a5f36abff4c9e0e24cc45cc8f7c3ea8a75"} Jan 29 16:40:35 crc kubenswrapper[4813]: I0129 16:40:35.921253 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c24v5" podStartSLOduration=2.727363404 podStartE2EDuration="6m9.921233082s" podCreationTimestamp="2026-01-29 16:34:26 +0000 UTC" firstStartedPulling="2026-01-29 16:34:27.511880945 +0000 UTC m=+319.999084161" lastFinishedPulling="2026-01-29 16:40:34.705750623 +0000 UTC m=+687.192953839" observedRunningTime="2026-01-29 16:40:35.916993902 +0000 UTC m=+688.404197118" watchObservedRunningTime="2026-01-29 16:40:35.921233082 +0000 UTC m=+688.408436318" Jan 29 16:40:36 crc kubenswrapper[4813]: I0129 16:40:36.388845 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:40:36 crc kubenswrapper[4813]: I0129 16:40:36.389217 
4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m6m45"
Jan 29 16:40:36 crc kubenswrapper[4813]: I0129 16:40:36.428035 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m6m45"
Jan 29 16:40:36 crc kubenswrapper[4813]: I0129 16:40:36.581900 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:40:36 crc kubenswrapper[4813]: I0129 16:40:36.581965 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c24v5"
Jan 29 16:40:37 crc kubenswrapper[4813]: I0129 16:40:37.615741 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c24v5" podUID="3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1" containerName="registry-server" probeResult="failure" output=<
Jan 29 16:40:37 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s
Jan 29 16:40:37 crc kubenswrapper[4813]: >
Jan 29 16:40:43 crc kubenswrapper[4813]: I0129 16:40:43.239623 4813 scope.go:117] "RemoveContainer" containerID="8ee8d82a364f06cced80be93c75ae35206187559f390e9f2369d41c941699140"
Jan 29 16:40:43 crc kubenswrapper[4813]: I0129 16:40:43.938833 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-7cjx7_4acefc9f-f68a-4566-a0f5-656b961d4267/kube-multus/2.log"
Jan 29 16:40:43 crc kubenswrapper[4813]: I0129 16:40:43.939276 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-7cjx7" event={"ID":"4acefc9f-f68a-4566-a0f5-656b961d4267","Type":"ContainerStarted","Data":"3a98f4c3879a9310ebb0ac3009f5023ca3b04f989427d64cf5dcea7e674132c1"}
Jan 29 16:40:46 crc kubenswrapper[4813]: I0129 16:40:46.239689 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2jbt9"
Jan 29 16:40:46 crc kubenswrapper[4813]: I0129 16:40:46.240303 4813 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:40:46 crc kubenswrapper[4813]: I0129 16:40:46.421624 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2jbt9"] Jan 29 16:40:46 crc kubenswrapper[4813]: W0129 16:40:46.432665 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3cbf7df_c4bd_422d_ac85_e28e505af034.slice/crio-5cefd51f0078610b44cef36fc93850eed98231c64e4e7a2c75296a7b1dd2441d WatchSource:0}: Error finding container 5cefd51f0078610b44cef36fc93850eed98231c64e4e7a2c75296a7b1dd2441d: Status 404 returned error can't find the container with id 5cefd51f0078610b44cef36fc93850eed98231c64e4e7a2c75296a7b1dd2441d Jan 29 16:40:46 crc kubenswrapper[4813]: I0129 16:40:46.572677 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m6m45" Jan 29 16:40:46 crc kubenswrapper[4813]: I0129 16:40:46.630559 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c24v5" Jan 29 16:40:46 crc kubenswrapper[4813]: I0129 16:40:46.675159 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c24v5" Jan 29 16:40:46 crc kubenswrapper[4813]: I0129 16:40:46.954272 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2jbt9" event={"ID":"c3cbf7df-c4bd-422d-ac85-e28e505af034","Type":"ContainerStarted","Data":"5cefd51f0078610b44cef36fc93850eed98231c64e4e7a2c75296a7b1dd2441d"} Jan 29 16:40:50 crc kubenswrapper[4813]: I0129 16:40:50.334833 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5skqm" Jan 29 16:41:00 crc kubenswrapper[4813]: I0129 16:41:00.240104 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:41:00 crc kubenswrapper[4813]: I0129 16:41:00.241566 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:41:04 crc kubenswrapper[4813]: E0129 16:41:04.673921 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/bash:latest" Jan 29 16:41:04 crc kubenswrapper[4813]: E0129 16:41:04.674554 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:storage,Image:quay.io/openstack-k8s-operators/bash:latest,Command:[bash],Args:[/usr/local/bin/crc-storage.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:PV_NUM,Value:12,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:crc-storage,ReadOnly:true,MountPath:/usr/local/bin/crc-storage.sh,SubPath:create-storage.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-mnt,ReadOnly:false,MountPath:/mnt/nodeMnt,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s76g6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-storage-crc-2jbt9_crc-storage(c3cbf7df-c4bd-422d-ac85-e28e505af034): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:41:04 crc kubenswrapper[4813]: E0129 16:41:04.676076 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="crc-storage/crc-storage-crc-2jbt9" podUID="c3cbf7df-c4bd-422d-ac85-e28e505af034" Jan 29 16:41:05 crc kubenswrapper[4813]: E0129 16:41:05.048417 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/bash:latest\\\"\"" pod="crc-storage/crc-storage-crc-2jbt9" podUID="c3cbf7df-c4bd-422d-ac85-e28e505af034" Jan 29 16:41:20 crc kubenswrapper[4813]: I0129 16:41:20.128775 4813 generic.go:334] "Generic (PLEG): container finished" podID="c3cbf7df-c4bd-422d-ac85-e28e505af034" containerID="d4673be574fbca26dc7164a32fbceacd6e4746e80697b09514841c083f18f8a5" exitCode=0 Jan 29 16:41:20 crc kubenswrapper[4813]: I0129 16:41:20.128938 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2jbt9" event={"ID":"c3cbf7df-c4bd-422d-ac85-e28e505af034","Type":"ContainerDied","Data":"d4673be574fbca26dc7164a32fbceacd6e4746e80697b09514841c083f18f8a5"} Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.398348 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2jbt9"
Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.528177 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c3cbf7df-c4bd-422d-ac85-e28e505af034-crc-storage\") pod \"c3cbf7df-c4bd-422d-ac85-e28e505af034\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") "
Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.528241 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c3cbf7df-c4bd-422d-ac85-e28e505af034-node-mnt\") pod \"c3cbf7df-c4bd-422d-ac85-e28e505af034\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") "
Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.528277 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s76g6\" (UniqueName: \"kubernetes.io/projected/c3cbf7df-c4bd-422d-ac85-e28e505af034-kube-api-access-s76g6\") pod \"c3cbf7df-c4bd-422d-ac85-e28e505af034\" (UID: \"c3cbf7df-c4bd-422d-ac85-e28e505af034\") "
Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.529622 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3cbf7df-c4bd-422d-ac85-e28e505af034-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "c3cbf7df-c4bd-422d-ac85-e28e505af034" (UID: "c3cbf7df-c4bd-422d-ac85-e28e505af034"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.534151 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3cbf7df-c4bd-422d-ac85-e28e505af034-kube-api-access-s76g6" (OuterVolumeSpecName: "kube-api-access-s76g6") pod "c3cbf7df-c4bd-422d-ac85-e28e505af034" (UID: "c3cbf7df-c4bd-422d-ac85-e28e505af034"). InnerVolumeSpecName "kube-api-access-s76g6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.552924 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cbf7df-c4bd-422d-ac85-e28e505af034-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "c3cbf7df-c4bd-422d-ac85-e28e505af034" (UID: "c3cbf7df-c4bd-422d-ac85-e28e505af034"). InnerVolumeSpecName "crc-storage".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.629579 4813 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c3cbf7df-c4bd-422d-ac85-e28e505af034-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.629619 4813 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c3cbf7df-c4bd-422d-ac85-e28e505af034-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:21 crc kubenswrapper[4813]: I0129 16:41:21.629629 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s76g6\" (UniqueName: \"kubernetes.io/projected/c3cbf7df-c4bd-422d-ac85-e28e505af034-kube-api-access-s76g6\") on node \"crc\" DevicePath \"\"" Jan 29 16:41:22 crc kubenswrapper[4813]: I0129 16:41:22.139944 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2jbt9" event={"ID":"c3cbf7df-c4bd-422d-ac85-e28e505af034","Type":"ContainerDied","Data":"5cefd51f0078610b44cef36fc93850eed98231c64e4e7a2c75296a7b1dd2441d"} Jan 29 16:41:22 crc kubenswrapper[4813]: I0129 16:41:22.139979 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cefd51f0078610b44cef36fc93850eed98231c64e4e7a2c75296a7b1dd2441d" Jan 29 16:41:22 crc kubenswrapper[4813]: I0129 16:41:22.140017 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2jbt9" Jan 29 16:41:30 crc kubenswrapper[4813]: I0129 16:41:30.240300 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:41:30 crc kubenswrapper[4813]: I0129 16:41:30.241262 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:41:30 crc kubenswrapper[4813]: I0129 16:41:30.247748 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:41:30 crc kubenswrapper[4813]: I0129 16:41:30.248299 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2ffde0f15a6d79920502c2dc8d9a89818cad0a3e312892131ea0a2ca924ac8e"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:41:30 crc kubenswrapper[4813]: I0129 16:41:30.248376 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://e2ffde0f15a6d79920502c2dc8d9a89818cad0a3e312892131ea0a2ca924ac8e" gracePeriod=600 Jan 29 16:41:31 crc kubenswrapper[4813]: I0129 16:41:31.206927 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" 
containerID="e2ffde0f15a6d79920502c2dc8d9a89818cad0a3e312892131ea0a2ca924ac8e" exitCode=0 Jan 29 16:41:31 crc kubenswrapper[4813]: I0129 16:41:31.207063 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"e2ffde0f15a6d79920502c2dc8d9a89818cad0a3e312892131ea0a2ca924ac8e"} Jan 29 16:41:31 crc kubenswrapper[4813]: I0129 16:41:31.207558 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"5309aa3c6552f400eb634ed59cfbbbe91e4b5c5d730cde7759bc71dc4f1aa28f"} Jan 29 16:41:31 crc kubenswrapper[4813]: I0129 16:41:31.207592 4813 scope.go:117] "RemoveContainer" containerID="f0fa0ba48c5aa1f9abd6566dd548d637e9193bba4c1da553e75757301ad99b41" Jan 29 16:41:54 crc kubenswrapper[4813]: I0129 16:41:54.531308 4813 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.761167 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chglh"] Jan 29 16:42:38 crc kubenswrapper[4813]: E0129 16:42:38.761902 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3cbf7df-c4bd-422d-ac85-e28e505af034" containerName="storage" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.761915 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3cbf7df-c4bd-422d-ac85-e28e505af034" containerName="storage" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.762012 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3cbf7df-c4bd-422d-ac85-e28e505af034" containerName="storage" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.762723 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.778392 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chglh"] Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.849643 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-catalog-content\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.849723 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-utilities\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.849846 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88vx\" (UniqueName: \"kubernetes.io/projected/576352f7-5900-40ec-b20d-f9b08c8766aa-kube-api-access-k88vx\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.951057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-utilities\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.951134 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k88vx\" (UniqueName: \"kubernetes.io/projected/576352f7-5900-40ec-b20d-f9b08c8766aa-kube-api-access-k88vx\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.951179 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-catalog-content\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.951651 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-catalog-content\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.951689 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-utilities\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:38 crc kubenswrapper[4813]: I0129 16:42:38.971525 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k88vx\" (UniqueName: \"kubernetes.io/projected/576352f7-5900-40ec-b20d-f9b08c8766aa-kube-api-access-k88vx\") pod \"certified-operators-chglh\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") " pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:39 crc kubenswrapper[4813]: I0129 16:42:39.079295 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chglh" Jan 29 16:42:39 crc kubenswrapper[4813]: I0129 16:42:39.528642 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chglh"] Jan 29 16:42:39 crc kubenswrapper[4813]: I0129 16:42:39.586201 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chglh" event={"ID":"576352f7-5900-40ec-b20d-f9b08c8766aa","Type":"ContainerStarted","Data":"5a7c6853d2607c3eefc7d59a9d09550502b3ff968c308f029c9acfd7b9a2bca3"} Jan 29 16:42:40 crc kubenswrapper[4813]: I0129 16:42:40.593250 4813 generic.go:334] "Generic (PLEG): container finished" podID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerID="0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70" exitCode=0 Jan 29 16:42:40 crc kubenswrapper[4813]: I0129 16:42:40.593446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chglh" event={"ID":"576352f7-5900-40ec-b20d-f9b08c8766aa","Type":"ContainerDied","Data":"0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70"} Jan 29 16:42:40 crc kubenswrapper[4813]: I0129 16:42:40.597502 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:42:41 crc kubenswrapper[4813]: I0129 16:42:41.601821 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chglh" event={"ID":"576352f7-5900-40ec-b20d-f9b08c8766aa","Type":"ContainerStarted","Data":"45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f"} Jan 29 16:42:42 crc kubenswrapper[4813]: I0129 16:42:42.613136 4813 generic.go:334] "Generic (PLEG): container finished" podID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerID="45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f" exitCode=0 Jan 29 16:42:42 crc kubenswrapper[4813]: I0129 16:42:42.613214 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chglh" event={"ID":"576352f7-5900-40ec-b20d-f9b08c8766aa","Type":"ContainerDied","Data":"45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f"} Jan 29 16:42:43 crc kubenswrapper[4813]: I0129 16:42:43.622018 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chglh" event={"ID":"576352f7-5900-40ec-b20d-f9b08c8766aa","Type":"ContainerStarted","Data":"d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b"} Jan 29 16:42:43 crc kubenswrapper[4813]: I0129 16:42:43.644301 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chglh" podStartSLOduration=3.166220079 podStartE2EDuration="5.644277147s" podCreationTimestamp="2026-01-29 16:42:38 +0000 UTC" firstStartedPulling="2026-01-29 16:42:40.596405828 +0000 UTC m=+813.083609054" lastFinishedPulling="2026-01-29 16:42:43.074462906 +0000 UTC m=+815.561666122" observedRunningTime="2026-01-29 16:42:43.639656973 +0000 UTC m=+816.126860189" watchObservedRunningTime="2026-01-29 
16:42:43.644277147 +0000 UTC m=+816.131480363"
Jan 29 16:42:49 crc kubenswrapper[4813]: I0129 16:42:49.080491 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chglh"
Jan 29 16:42:49 crc kubenswrapper[4813]: I0129 16:42:49.080552 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chglh"
Jan 29 16:42:49 crc kubenswrapper[4813]: I0129 16:42:49.124601 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chglh"
Jan 29 16:42:49 crc kubenswrapper[4813]: I0129 16:42:49.698041 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chglh"
Jan 29 16:42:49 crc kubenswrapper[4813]: I0129 16:42:49.750242 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chglh"]
Jan 29 16:42:51 crc kubenswrapper[4813]: I0129 16:42:51.665010 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-chglh" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerName="registry-server" containerID="cri-o://d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b" gracePeriod=2
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.013269 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chglh"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.114212 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-utilities\") pod \"576352f7-5900-40ec-b20d-f9b08c8766aa\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") "
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.114263 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-catalog-content\") pod \"576352f7-5900-40ec-b20d-f9b08c8766aa\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") "
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.114320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k88vx\" (UniqueName: \"kubernetes.io/projected/576352f7-5900-40ec-b20d-f9b08c8766aa-kube-api-access-k88vx\") pod \"576352f7-5900-40ec-b20d-f9b08c8766aa\" (UID: \"576352f7-5900-40ec-b20d-f9b08c8766aa\") "
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.115262 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-utilities" (OuterVolumeSpecName: "utilities") pod "576352f7-5900-40ec-b20d-f9b08c8766aa" (UID: "576352f7-5900-40ec-b20d-f9b08c8766aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.121288 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576352f7-5900-40ec-b20d-f9b08c8766aa-kube-api-access-k88vx" (OuterVolumeSpecName: "kube-api-access-k88vx") pod "576352f7-5900-40ec-b20d-f9b08c8766aa" (UID: "576352f7-5900-40ec-b20d-f9b08c8766aa"). InnerVolumeSpecName "kube-api-access-k88vx".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.165746 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "576352f7-5900-40ec-b20d-f9b08c8766aa" (UID: "576352f7-5900-40ec-b20d-f9b08c8766aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.215536 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.215575 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/576352f7-5900-40ec-b20d-f9b08c8766aa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.215587 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k88vx\" (UniqueName: \"kubernetes.io/projected/576352f7-5900-40ec-b20d-f9b08c8766aa-kube-api-access-k88vx\") on node \"crc\" DevicePath \"\"" Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.674483 4813 generic.go:334] "Generic (PLEG): container finished" podID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerID="d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b" exitCode=0 Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.674534 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chglh" event={"ID":"576352f7-5900-40ec-b20d-f9b08c8766aa","Type":"ContainerDied","Data":"d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b"} Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.674608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chglh" event={"ID":"576352f7-5900-40ec-b20d-f9b08c8766aa","Type":"ContainerDied","Data":"5a7c6853d2607c3eefc7d59a9d09550502b3ff968c308f029c9acfd7b9a2bca3"} Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.674621 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chglh"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.674631 4813 scope.go:117] "RemoveContainer" containerID="d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.697517 4813 scope.go:117] "RemoveContainer" containerID="45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.701898 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chglh"]
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.708717 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-chglh"]
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.712987 4813 scope.go:117] "RemoveContainer" containerID="0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.733959 4813 scope.go:117] "RemoveContainer" containerID="d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b"
Jan 29 16:42:52 crc kubenswrapper[4813]: E0129 16:42:52.734660 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b\": container with ID starting with d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b not found: ID does not exist" containerID="d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.734721 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b"} err="failed to get container status \"d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b\": rpc error: code = NotFound desc = could not find container \"d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b\": container with ID starting with d2ef37e948745e1bf4e6dafceb0881c4d1187a9a4b30289438574c90af671a3b not found: ID does not exist"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.734755 4813 scope.go:117] "RemoveContainer" containerID="45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f"
Jan 29 16:42:52 crc kubenswrapper[4813]: E0129 16:42:52.735245 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f\": container with ID starting with 45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f not found: ID does not exist" containerID="45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.735313 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f"} err="failed to get container status \"45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f\": rpc error: code = NotFound desc = could not find container \"45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f\": container with ID starting with 45ab7af841da207760040a79ce18358f1ff6fe4c947ca8113cc756bbb4c5466f not found: ID does not exist"
Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.735366 4813 scope.go:117] "RemoveContainer"
containerID="0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70" Jan 29 16:42:52 crc kubenswrapper[4813]: E0129 16:42:52.735806 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70\": container with ID starting with 0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70 not found: ID does not exist" containerID="0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70" Jan 29 16:42:52 crc kubenswrapper[4813]: I0129 16:42:52.735840 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70"} err="failed to get container status \"0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70\": rpc error: code = NotFound desc = could not find container \"0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70\": container with ID starting with 0b04d5c13ec51960c5814a3c242b526ff6ff1654e0e3fb9ccfd276422b0b1e70 not found: ID does not exist" Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.247529 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" path="/var/lib/kubelet/pods/576352f7-5900-40ec-b20d-f9b08c8766aa/volumes" Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.777778 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6cs9w"] Jan 29 16:42:54 crc kubenswrapper[4813]: E0129 16:42:54.778182 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerName="extract-content" Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.778213 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerName="extract-content" Jan 29 16:42:54 crc kubenswrapper[4813]: E0129 16:42:54.778232 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerName="extract-utilities" Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.778248 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerName="extract-utilities" Jan 29 16:42:54 crc kubenswrapper[4813]: E0129 16:42:54.778266 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerName="registry-server" Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.778288 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerName="registry-server" Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.778523 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="576352f7-5900-40ec-b20d-f9b08c8766aa" containerName="registry-server" Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.780118 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.785289 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6cs9w"]
Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.949331 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-utilities\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.949453 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7bft\" (UniqueName: \"kubernetes.io/projected/0cdc541c-e837-40eb-b6af-634482c096c7-kube-api-access-t7bft\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:54 crc kubenswrapper[4813]: I0129 16:42:54.949495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-catalog-content\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.050609 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-catalog-content\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.050704 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-utilities\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.051142 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-catalog-content\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.051222 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-utilities\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.051434 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7bft\" (UniqueName: \"kubernetes.io/projected/0cdc541c-e837-40eb-b6af-634482c096c7-kube-api-access-t7bft\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.075002 4813 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-t7bft\" (UniqueName: \"kubernetes.io/projected/0cdc541c-e837-40eb-b6af-634482c096c7-kube-api-access-t7bft\") pod \"community-operators-6cs9w\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") " pod="openshift-marketplace/community-operators-6cs9w" Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.103856 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6cs9w" Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.352479 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6cs9w"] Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.694433 4813 generic.go:334] "Generic (PLEG): container finished" podID="0cdc541c-e837-40eb-b6af-634482c096c7" containerID="21567b7dd1a1aed7108f7888cb98b1a41c70c948f81690bb738dece88ab9a131" exitCode=0 Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.694538 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cs9w" event={"ID":"0cdc541c-e837-40eb-b6af-634482c096c7","Type":"ContainerDied","Data":"21567b7dd1a1aed7108f7888cb98b1a41c70c948f81690bb738dece88ab9a131"} Jan 29 16:42:55 crc kubenswrapper[4813]: I0129 16:42:55.694794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cs9w" event={"ID":"0cdc541c-e837-40eb-b6af-634482c096c7","Type":"ContainerStarted","Data":"c32f53097dc2f2c5233ca416c95b6387322cb5438726ff366d8b76736caf406c"} Jan 29 16:42:57 crc kubenswrapper[4813]: E0129 16:42:57.204437 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cdc541c_e837_40eb_b6af_634482c096c7.slice/crio-conmon-225e04f8f1987514ec53711ac8e9312545c7da5404b5915263402e05eae41493.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:42:57 crc kubenswrapper[4813]: I0129 16:42:57.707329 4813 generic.go:334] "Generic (PLEG): container finished" podID="0cdc541c-e837-40eb-b6af-634482c096c7" containerID="225e04f8f1987514ec53711ac8e9312545c7da5404b5915263402e05eae41493" exitCode=0 Jan 29 16:42:57 crc kubenswrapper[4813]: I0129 16:42:57.707396 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cs9w" event={"ID":"0cdc541c-e837-40eb-b6af-634482c096c7","Type":"ContainerDied","Data":"225e04f8f1987514ec53711ac8e9312545c7da5404b5915263402e05eae41493"} Jan 29 16:42:58 crc kubenswrapper[4813]: I0129 16:42:58.715730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cs9w" event={"ID":"0cdc541c-e837-40eb-b6af-634482c096c7","Type":"ContainerStarted","Data":"66fd9cf72bd52250b399a8bf505e97ca546dd29847b04c4c4836fb9af5701c87"} Jan 29 16:42:58 crc kubenswrapper[4813]: I0129 16:42:58.736955 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6cs9w" podStartSLOduration=2.350088607 podStartE2EDuration="4.736929805s" podCreationTimestamp="2026-01-29 16:42:54 +0000 UTC" firstStartedPulling="2026-01-29 16:42:55.697077948 +0000 UTC m=+828.184281164" lastFinishedPulling="2026-01-29 16:42:58.083919146 +0000 UTC m=+830.571122362" observedRunningTime="2026-01-29 16:42:58.736562854 +0000 UTC m=+831.223766070" watchObservedRunningTime="2026-01-29 16:42:58.736929805 +0000 UTC m=+831.224133021" Jan 29 16:43:05 crc 
kubenswrapper[4813]: I0129 16:43:05.104807 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:43:05 crc kubenswrapper[4813]: I0129 16:43:05.105352 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:43:05 crc kubenswrapper[4813]: I0129 16:43:05.147916 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:43:05 crc kubenswrapper[4813]: I0129 16:43:05.795785 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:43:05 crc kubenswrapper[4813]: I0129 16:43:05.854372 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6cs9w"]
Jan 29 16:43:07 crc kubenswrapper[4813]: I0129 16:43:07.767464 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6cs9w" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" containerName="registry-server" containerID="cri-o://66fd9cf72bd52250b399a8bf505e97ca546dd29847b04c4c4836fb9af5701c87" gracePeriod=2
Jan 29 16:43:08 crc kubenswrapper[4813]: I0129 16:43:08.774157 4813 generic.go:334] "Generic (PLEG): container finished" podID="0cdc541c-e837-40eb-b6af-634482c096c7" containerID="66fd9cf72bd52250b399a8bf505e97ca546dd29847b04c4c4836fb9af5701c87" exitCode=0
Jan 29 16:43:08 crc kubenswrapper[4813]: I0129 16:43:08.774407 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cs9w" event={"ID":"0cdc541c-e837-40eb-b6af-634482c096c7","Type":"ContainerDied","Data":"66fd9cf72bd52250b399a8bf505e97ca546dd29847b04c4c4836fb9af5701c87"}
Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.250873 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6cs9w"
Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.427868 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-catalog-content\") pod \"0cdc541c-e837-40eb-b6af-634482c096c7\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") "
Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.427937 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-utilities\") pod \"0cdc541c-e837-40eb-b6af-634482c096c7\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") "
Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.427977 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7bft\" (UniqueName: \"kubernetes.io/projected/0cdc541c-e837-40eb-b6af-634482c096c7-kube-api-access-t7bft\") pod \"0cdc541c-e837-40eb-b6af-634482c096c7\" (UID: \"0cdc541c-e837-40eb-b6af-634482c096c7\") "
Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.428783 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-utilities" (OuterVolumeSpecName: "utilities") pod "0cdc541c-e837-40eb-b6af-634482c096c7" (UID: "0cdc541c-e837-40eb-b6af-634482c096c7"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.433142 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cdc541c-e837-40eb-b6af-634482c096c7-kube-api-access-t7bft" (OuterVolumeSpecName: "kube-api-access-t7bft") pod "0cdc541c-e837-40eb-b6af-634482c096c7" (UID: "0cdc541c-e837-40eb-b6af-634482c096c7"). InnerVolumeSpecName "kube-api-access-t7bft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.473696 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cdc541c-e837-40eb-b6af-634482c096c7" (UID: "0cdc541c-e837-40eb-b6af-634482c096c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.529047 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7bft\" (UniqueName: \"kubernetes.io/projected/0cdc541c-e837-40eb-b6af-634482c096c7-kube-api-access-t7bft\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.529078 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.529087 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cdc541c-e837-40eb-b6af-634482c096c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.782401 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cs9w" event={"ID":"0cdc541c-e837-40eb-b6af-634482c096c7","Type":"ContainerDied","Data":"c32f53097dc2f2c5233ca416c95b6387322cb5438726ff366d8b76736caf406c"} Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.782454 4813 scope.go:117] "RemoveContainer" containerID="66fd9cf72bd52250b399a8bf505e97ca546dd29847b04c4c4836fb9af5701c87" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.782573 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6cs9w" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.797502 4813 scope.go:117] "RemoveContainer" containerID="225e04f8f1987514ec53711ac8e9312545c7da5404b5915263402e05eae41493" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.809349 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6cs9w"] Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.815729 4813 scope.go:117] "RemoveContainer" containerID="21567b7dd1a1aed7108f7888cb98b1a41c70c948f81690bb738dece88ab9a131" Jan 29 16:43:09 crc kubenswrapper[4813]: I0129 16:43:09.816486 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6cs9w"] Jan 29 16:43:10 crc kubenswrapper[4813]: I0129 16:43:10.246801 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" path="/var/lib/kubelet/pods/0cdc541c-e837-40eb-b6af-634482c096c7/volumes" Jan 29 16:43:30 crc kubenswrapper[4813]: I0129 16:43:30.240431 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:43:30 crc kubenswrapper[4813]: I0129 16:43:30.241054 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.064891 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xzgvv"] Jan 29 16:43:53 crc kubenswrapper[4813]: E0129 16:43:53.065884 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" containerName="registry-server" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.065899 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" containerName="registry-server" Jan 29 16:43:53 crc kubenswrapper[4813]: E0129 16:43:53.065912 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" containerName="extract-utilities" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.065918 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" containerName="extract-utilities" Jan 29 16:43:53 crc kubenswrapper[4813]: E0129 16:43:53.065927 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" containerName="extract-content" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.065935 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" containerName="extract-content" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.066052 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cdc541c-e837-40eb-b6af-634482c096c7" containerName="registry-server" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.066845 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.086462 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xzgvv"] Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.115196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-catalog-content\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.115397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2fdm\" (UniqueName: \"kubernetes.io/projected/f24f0ed1-7b9d-4892-a554-a441762aacab-kube-api-access-c2fdm\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.115481 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-utilities\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.217173 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2fdm\" (UniqueName: \"kubernetes.io/projected/f24f0ed1-7b9d-4892-a554-a441762aacab-kube-api-access-c2fdm\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.217272 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-utilities\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.217341 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-catalog-content\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.217852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-catalog-content\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.217965 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-utilities\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.240990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c2fdm\" (UniqueName: \"kubernetes.io/projected/f24f0ed1-7b9d-4892-a554-a441762aacab-kube-api-access-c2fdm\") pod \"redhat-operators-xzgvv\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.394783 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:43:53 crc kubenswrapper[4813]: I0129 16:43:53.626893 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xzgvv"] Jan 29 16:43:54 crc kubenswrapper[4813]: I0129 16:43:54.016433 4813 generic.go:334] "Generic (PLEG): container finished" podID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerID="2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6" exitCode=0 Jan 29 16:43:54 crc kubenswrapper[4813]: I0129 16:43:54.016507 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzgvv" event={"ID":"f24f0ed1-7b9d-4892-a554-a441762aacab","Type":"ContainerDied","Data":"2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6"} Jan 29 16:43:54 crc kubenswrapper[4813]: I0129 16:43:54.016573 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzgvv" event={"ID":"f24f0ed1-7b9d-4892-a554-a441762aacab","Type":"ContainerStarted","Data":"ad109f9e1ccaa20216b47828df33129618a997f95fe32e187c3a1d68f998820b"} Jan 29 16:43:55 crc kubenswrapper[4813]: I0129 16:43:55.024202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzgvv" event={"ID":"f24f0ed1-7b9d-4892-a554-a441762aacab","Type":"ContainerStarted","Data":"9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838"} Jan 29 16:43:56 crc kubenswrapper[4813]: I0129 16:43:56.032797 4813 generic.go:334] "Generic (PLEG): container finished" podID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerID="9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838" exitCode=0 Jan 29 16:43:56 crc kubenswrapper[4813]: I0129 16:43:56.032860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzgvv" event={"ID":"f24f0ed1-7b9d-4892-a554-a441762aacab","Type":"ContainerDied","Data":"9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838"} Jan 29 16:43:58 crc kubenswrapper[4813]: I0129 16:43:58.049716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzgvv" event={"ID":"f24f0ed1-7b9d-4892-a554-a441762aacab","Type":"ContainerStarted","Data":"359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983"} Jan 29 16:43:58 crc kubenswrapper[4813]: I0129 16:43:58.076971 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xzgvv" podStartSLOduration=1.983083251 podStartE2EDuration="5.07695128s" podCreationTimestamp="2026-01-29 16:43:53 +0000 UTC" firstStartedPulling="2026-01-29 16:43:54.018525235 +0000 UTC m=+886.505728451" lastFinishedPulling="2026-01-29 16:43:57.112393264 +0000 UTC m=+889.599596480" observedRunningTime="2026-01-29 16:43:58.073815548 +0000 UTC m=+890.561018754" watchObservedRunningTime="2026-01-29 16:43:58.07695128 +0000 UTC m=+890.564154496" Jan 29 16:44:00 crc kubenswrapper[4813]: I0129 16:44:00.240258 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:44:00 crc kubenswrapper[4813]: I0129 16:44:00.240830 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:44:03 crc kubenswrapper[4813]: I0129 16:44:03.395815 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:44:03 crc kubenswrapper[4813]: I0129 16:44:03.395953 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:44:03 crc kubenswrapper[4813]: I0129 16:44:03.445158 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:44:04 crc kubenswrapper[4813]: I0129 16:44:04.131559 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:44:04 crc kubenswrapper[4813]: I0129 16:44:04.187238 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xzgvv"] Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.092367 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xzgvv" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerName="registry-server" containerID="cri-o://359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983" gracePeriod=2 Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.097393 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b99lr"] Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.098976 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.105554 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99lr"] Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.209298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-utilities\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.209375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpf4v\" (UniqueName: \"kubernetes.io/projected/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-kube-api-access-mpf4v\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.209509 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-catalog-content\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.310412 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-utilities\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.310479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpf4v\" (UniqueName: \"kubernetes.io/projected/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-kube-api-access-mpf4v\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.310524 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-catalog-content\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.310936 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-utilities\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.311007 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-catalog-content\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.330138 4813 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mpf4v\" (UniqueName: \"kubernetes.io/projected/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-kube-api-access-mpf4v\") pod \"redhat-marketplace-b99lr\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.427126 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:06 crc kubenswrapper[4813]: I0129 16:44:06.610035 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99lr"] Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.099439 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99lr" event={"ID":"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92","Type":"ContainerStarted","Data":"91da429c8aa5add2032231e91c2ddb8cbe39e4cda7b1a33b87e8b530833cfb93"} Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.520864 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.627054 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2fdm\" (UniqueName: \"kubernetes.io/projected/f24f0ed1-7b9d-4892-a554-a441762aacab-kube-api-access-c2fdm\") pod \"f24f0ed1-7b9d-4892-a554-a441762aacab\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.627228 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-catalog-content\") pod \"f24f0ed1-7b9d-4892-a554-a441762aacab\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.627250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-utilities\") pod \"f24f0ed1-7b9d-4892-a554-a441762aacab\" (UID: \"f24f0ed1-7b9d-4892-a554-a441762aacab\") " Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.628328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-utilities" (OuterVolumeSpecName: "utilities") pod "f24f0ed1-7b9d-4892-a554-a441762aacab" (UID: "f24f0ed1-7b9d-4892-a554-a441762aacab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.634100 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f24f0ed1-7b9d-4892-a554-a441762aacab-kube-api-access-c2fdm" (OuterVolumeSpecName: "kube-api-access-c2fdm") pod "f24f0ed1-7b9d-4892-a554-a441762aacab" (UID: "f24f0ed1-7b9d-4892-a554-a441762aacab"). InnerVolumeSpecName "kube-api-access-c2fdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.729136 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2fdm\" (UniqueName: \"kubernetes.io/projected/f24f0ed1-7b9d-4892-a554-a441762aacab-kube-api-access-c2fdm\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.729172 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.744698 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f24f0ed1-7b9d-4892-a554-a441762aacab" (UID: "f24f0ed1-7b9d-4892-a554-a441762aacab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:44:07 crc kubenswrapper[4813]: I0129 16:44:07.830278 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f24f0ed1-7b9d-4892-a554-a441762aacab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.106358 4813 generic.go:334] "Generic (PLEG): container finished" podID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerID="c19120fdbaab156ddcdf1de3ff3f3434983c99cf541e909e4fc87fa49892e334" exitCode=0 Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.106435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99lr" event={"ID":"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92","Type":"ContainerDied","Data":"c19120fdbaab156ddcdf1de3ff3f3434983c99cf541e909e4fc87fa49892e334"} Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.112588 4813 generic.go:334] "Generic (PLEG): container finished" podID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerID="359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983" exitCode=0 Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.112634 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzgvv" event={"ID":"f24f0ed1-7b9d-4892-a554-a441762aacab","Type":"ContainerDied","Data":"359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983"} Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.112674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xzgvv" event={"ID":"f24f0ed1-7b9d-4892-a554-a441762aacab","Type":"ContainerDied","Data":"ad109f9e1ccaa20216b47828df33129618a997f95fe32e187c3a1d68f998820b"} Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.112694 4813 scope.go:117] "RemoveContainer" containerID="359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.112718 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xzgvv" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.132368 4813 scope.go:117] "RemoveContainer" containerID="9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.149215 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xzgvv"] Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.150888 4813 scope.go:117] "RemoveContainer" containerID="2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.152665 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xzgvv"] Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.167847 4813 scope.go:117] "RemoveContainer" containerID="359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983" Jan 29 16:44:08 crc kubenswrapper[4813]: E0129 16:44:08.168376 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983\": container with ID starting with 359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983 not found: ID does not exist" containerID="359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.168420 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983"} err="failed to get container status \"359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983\": rpc error: code = NotFound desc = could not find container \"359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983\": container with ID starting with 359dd4caffaa4f8a21b5e042b93de368487ed312bc24d6ffe73308f200983983 not found: ID does not exist" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.168447 4813 scope.go:117] "RemoveContainer" containerID="9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838" Jan 29 16:44:08 crc kubenswrapper[4813]: E0129 16:44:08.169023 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838\": container with ID starting with 9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838 not found: ID does not exist" containerID="9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.169055 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838"} err="failed to get container status \"9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838\": rpc error: code = NotFound desc = could not find container \"9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838\": container with ID starting with 9d1a993229c46876364fb22efa754dc79b2709a25adc3b0ddd52b25f9bc07838 not found: ID does not exist" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.169075 4813 scope.go:117] "RemoveContainer" containerID="2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6" Jan 29 16:44:08 crc kubenswrapper[4813]: E0129 16:44:08.169338 4813 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6\": container with ID starting with 2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6 not found: ID does not exist" containerID="2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.169375 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6"} err="failed to get container status \"2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6\": rpc error: code = NotFound desc = could not find container \"2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6\": container with ID starting with 2fd1d1c8365c09b8d00a37dd82ecaf98b0b28ac7588214d2cd3dc907747febf6 not found: ID does not exist" Jan 29 16:44:08 crc kubenswrapper[4813]: I0129 16:44:08.248155 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" path="/var/lib/kubelet/pods/f24f0ed1-7b9d-4892-a554-a441762aacab/volumes" Jan 29 16:44:09 crc kubenswrapper[4813]: I0129 16:44:09.119639 4813 generic.go:334] "Generic (PLEG): container finished" podID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerID="807cb62c35fb102575aa62474a4dd6bb297ab7db132982b6840e21c355f0ad13" exitCode=0 Jan 29 16:44:09 crc kubenswrapper[4813]: I0129 16:44:09.119687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99lr" event={"ID":"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92","Type":"ContainerDied","Data":"807cb62c35fb102575aa62474a4dd6bb297ab7db132982b6840e21c355f0ad13"} Jan 29 16:44:10 crc kubenswrapper[4813]: I0129 16:44:10.127081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99lr" event={"ID":"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92","Type":"ContainerStarted","Data":"9932fd7bc5aa0212b38b4989166d06e54d6e2ffee528d84858483778964e1333"} Jan 29 16:44:16 crc kubenswrapper[4813]: I0129 16:44:16.428551 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:16 crc kubenswrapper[4813]: I0129 16:44:16.429094 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:16 crc kubenswrapper[4813]: I0129 16:44:16.469598 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:16 crc kubenswrapper[4813]: I0129 16:44:16.491907 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b99lr" podStartSLOduration=9.00857422 podStartE2EDuration="10.491881013s" podCreationTimestamp="2026-01-29 16:44:06 +0000 UTC" firstStartedPulling="2026-01-29 16:44:08.109119044 +0000 UTC m=+900.596322250" lastFinishedPulling="2026-01-29 16:44:09.592425827 +0000 UTC m=+902.079629043" observedRunningTime="2026-01-29 16:44:10.147327825 +0000 UTC m=+902.634531031" watchObservedRunningTime="2026-01-29 16:44:16.491881013 +0000 UTC m=+908.979084229" Jan 29 16:44:17 crc kubenswrapper[4813]: I0129 16:44:17.203341 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:17 crc kubenswrapper[4813]: I0129 16:44:17.252494 4813 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99lr"] Jan 29 16:44:19 crc kubenswrapper[4813]: I0129 16:44:19.173419 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b99lr" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerName="registry-server" containerID="cri-o://9932fd7bc5aa0212b38b4989166d06e54d6e2ffee528d84858483778964e1333" gracePeriod=2 Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.181864 4813 generic.go:334] "Generic (PLEG): container finished" podID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerID="9932fd7bc5aa0212b38b4989166d06e54d6e2ffee528d84858483778964e1333" exitCode=0 Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.181910 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99lr" event={"ID":"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92","Type":"ContainerDied","Data":"9932fd7bc5aa0212b38b4989166d06e54d6e2ffee528d84858483778964e1333"} Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.181933 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99lr" event={"ID":"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92","Type":"ContainerDied","Data":"91da429c8aa5add2032231e91c2ddb8cbe39e4cda7b1a33b87e8b530833cfb93"} Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.181944 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91da429c8aa5add2032231e91c2ddb8cbe39e4cda7b1a33b87e8b530833cfb93" Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.202607 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.289440 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-catalog-content\") pod \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.289490 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpf4v\" (UniqueName: \"kubernetes.io/projected/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-kube-api-access-mpf4v\") pod \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.289521 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-utilities\") pod \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\" (UID: \"420acdf4-bd18-4d2c-b2cc-d8232ccb0e92\") " Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.290466 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-utilities" (OuterVolumeSpecName: "utilities") pod "420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" (UID: "420acdf4-bd18-4d2c-b2cc-d8232ccb0e92"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.296828 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-kube-api-access-mpf4v" (OuterVolumeSpecName: "kube-api-access-mpf4v") pod "420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" (UID: "420acdf4-bd18-4d2c-b2cc-d8232ccb0e92"). InnerVolumeSpecName "kube-api-access-mpf4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.313986 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" (UID: "420acdf4-bd18-4d2c-b2cc-d8232ccb0e92"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.391326 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.391358 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpf4v\" (UniqueName: \"kubernetes.io/projected/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-kube-api-access-mpf4v\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:20 crc kubenswrapper[4813]: I0129 16:44:20.391370 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:44:21 crc kubenswrapper[4813]: I0129 16:44:21.188061 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b99lr" Jan 29 16:44:21 crc kubenswrapper[4813]: I0129 16:44:21.221557 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99lr"] Jan 29 16:44:21 crc kubenswrapper[4813]: I0129 16:44:21.226904 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99lr"] Jan 29 16:44:22 crc kubenswrapper[4813]: I0129 16:44:22.246219 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" path="/var/lib/kubelet/pods/420acdf4-bd18-4d2c-b2cc-d8232ccb0e92/volumes" Jan 29 16:44:30 crc kubenswrapper[4813]: I0129 16:44:30.240872 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:44:30 crc kubenswrapper[4813]: I0129 16:44:30.241545 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:44:30 crc kubenswrapper[4813]: I0129 16:44:30.248459 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:44:30 crc kubenswrapper[4813]: I0129 16:44:30.249918 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5309aa3c6552f400eb634ed59cfbbbe91e4b5c5d730cde7759bc71dc4f1aa28f"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:44:30 crc kubenswrapper[4813]: I0129 16:44:30.250002 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://5309aa3c6552f400eb634ed59cfbbbe91e4b5c5d730cde7759bc71dc4f1aa28f" gracePeriod=600 Jan 29 16:44:31 crc kubenswrapper[4813]: I0129 16:44:31.243881 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="5309aa3c6552f400eb634ed59cfbbbe91e4b5c5d730cde7759bc71dc4f1aa28f" exitCode=0 Jan 29 16:44:31 crc kubenswrapper[4813]: I0129 16:44:31.243939 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"5309aa3c6552f400eb634ed59cfbbbe91e4b5c5d730cde7759bc71dc4f1aa28f"} Jan 29 16:44:31 crc kubenswrapper[4813]: I0129 16:44:31.244415 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"7f51954817b4222a923a0139a8498177396724dcae69eeee154578e427cd34a8"} Jan 29 16:44:31 crc kubenswrapper[4813]: I0129 16:44:31.244439 4813 scope.go:117] "RemoveContainer" 
containerID="e2ffde0f15a6d79920502c2dc8d9a89818cad0a3e312892131ea0a2ca924ac8e" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.162613 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk"] Jan 29 16:45:00 crc kubenswrapper[4813]: E0129 16:45:00.163444 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerName="registry-server" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.163527 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerName="registry-server" Jan 29 16:45:00 crc kubenswrapper[4813]: E0129 16:45:00.163541 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerName="registry-server" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.163548 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerName="registry-server" Jan 29 16:45:00 crc kubenswrapper[4813]: E0129 16:45:00.163556 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerName="extract-content" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.163569 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerName="extract-content" Jan 29 16:45:00 crc kubenswrapper[4813]: E0129 16:45:00.163580 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerName="extract-utilities" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.163587 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerName="extract-utilities" Jan 29 16:45:00 crc kubenswrapper[4813]: E0129 16:45:00.163600 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerName="extract-content" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.163608 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerName="extract-content" Jan 29 16:45:00 crc kubenswrapper[4813]: E0129 16:45:00.163620 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerName="extract-utilities" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.163628 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerName="extract-utilities" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.163741 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24f0ed1-7b9d-4892-a554-a441762aacab" containerName="registry-server" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.163761 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="420acdf4-bd18-4d2c-b2cc-d8232ccb0e92" containerName="registry-server" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.164246 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.168037 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.170646 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.174827 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk"] Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.339719 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-config-volume\") pod \"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.339864 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2j7k\" (UniqueName: \"kubernetes.io/projected/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-kube-api-access-r2j7k\") pod \"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.339924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-secret-volume\") pod \"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.441686 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-config-volume\") pod \"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.442256 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2j7k\" (UniqueName: \"kubernetes.io/projected/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-kube-api-access-r2j7k\") pod \"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.442294 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-secret-volume\") pod \"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.443297 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-config-volume\") pod 
\"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.450852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-secret-volume\") pod \"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.466242 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2j7k\" (UniqueName: \"kubernetes.io/projected/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-kube-api-access-r2j7k\") pod \"collect-profiles-29495085-4vplk\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.484052 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:00 crc kubenswrapper[4813]: I0129 16:45:00.680187 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk"] Jan 29 16:45:01 crc kubenswrapper[4813]: I0129 16:45:01.421467 4813 generic.go:334] "Generic (PLEG): container finished" podID="05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b" containerID="1cc8dfac029bdaa7a481fadeaec6a3e80001eac67220e8dea219c8a24868b040" exitCode=0 Jan 29 16:45:01 crc kubenswrapper[4813]: I0129 16:45:01.421539 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" event={"ID":"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b","Type":"ContainerDied","Data":"1cc8dfac029bdaa7a481fadeaec6a3e80001eac67220e8dea219c8a24868b040"} Jan 29 16:45:01 crc kubenswrapper[4813]: I0129 16:45:01.421768 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" event={"ID":"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b","Type":"ContainerStarted","Data":"aff3c6e92fc46d3fd6f0996d0c3fe28456788bd40bd6d39fa7e1288723462b8a"} Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.688404 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.773848 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2j7k\" (UniqueName: \"kubernetes.io/projected/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-kube-api-access-r2j7k\") pod \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.773958 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-config-volume\") pod \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.774004 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-secret-volume\") pod \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\" (UID: \"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b\") " Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.774770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b" (UID: "05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.779427 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b" (UID: "05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.779557 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-kube-api-access-r2j7k" (OuterVolumeSpecName: "kube-api-access-r2j7k") pod "05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b" (UID: "05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b"). InnerVolumeSpecName "kube-api-access-r2j7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.875070 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.875148 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:02 crc kubenswrapper[4813]: I0129 16:45:02.875168 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2j7k\" (UniqueName: \"kubernetes.io/projected/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b-kube-api-access-r2j7k\") on node \"crc\" DevicePath \"\"" Jan 29 16:45:03 crc kubenswrapper[4813]: I0129 16:45:03.436754 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" event={"ID":"05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b","Type":"ContainerDied","Data":"aff3c6e92fc46d3fd6f0996d0c3fe28456788bd40bd6d39fa7e1288723462b8a"} Jan 29 16:45:03 crc kubenswrapper[4813]: I0129 16:45:03.436804 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aff3c6e92fc46d3fd6f0996d0c3fe28456788bd40bd6d39fa7e1288723462b8a" Jan 29 16:45:03 crc kubenswrapper[4813]: I0129 16:45:03.436811 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.441341 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd"] Jan 29 16:46:18 crc kubenswrapper[4813]: E0129 16:46:18.441955 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b" containerName="collect-profiles" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.441967 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b" containerName="collect-profiles" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.442077 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b" containerName="collect-profiles" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.442749 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.445163 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.462687 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd"] Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.614887 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.614985 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgz8l\" (UniqueName: \"kubernetes.io/projected/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-kube-api-access-jgz8l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.615034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.716602 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.716658 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgz8l\" (UniqueName: \"kubernetes.io/projected/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-kube-api-access-jgz8l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.716687 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.717267 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.717282 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.740572 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgz8l\" (UniqueName: \"kubernetes.io/projected/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-kube-api-access-jgz8l\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.764684 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:18 crc kubenswrapper[4813]: I0129 16:46:18.930139 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd"] Jan 29 16:46:19 crc kubenswrapper[4813]: I0129 16:46:19.845938 4813 generic.go:334] "Generic (PLEG): container finished" podID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerID="8a4c30854dd0322947c6603dc3fcb9ba25bda5d505eaf5f6a1c03ded00398328" exitCode=0 Jan 29 16:46:19 crc kubenswrapper[4813]: I0129 16:46:19.846007 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" event={"ID":"aefb8fdb-ae07-4844-b7de-bf30c35e65d1","Type":"ContainerDied","Data":"8a4c30854dd0322947c6603dc3fcb9ba25bda5d505eaf5f6a1c03ded00398328"} Jan 29 16:46:19 crc kubenswrapper[4813]: I0129 16:46:19.846361 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" event={"ID":"aefb8fdb-ae07-4844-b7de-bf30c35e65d1","Type":"ContainerStarted","Data":"3553d68dc43e5dc8c86f69ce0031b8cc45af94f7e061a96e058c1bbba684e31a"} Jan 29 16:46:21 crc kubenswrapper[4813]: I0129 16:46:21.858994 4813 generic.go:334] "Generic (PLEG): container finished" podID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerID="10590b229e7497e54601e33dd308a7c520da0219261de7211a0e359121e970e5" exitCode=0 Jan 29 16:46:21 crc kubenswrapper[4813]: I0129 16:46:21.859163 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" event={"ID":"aefb8fdb-ae07-4844-b7de-bf30c35e65d1","Type":"ContainerDied","Data":"10590b229e7497e54601e33dd308a7c520da0219261de7211a0e359121e970e5"} Jan 29 16:46:22 crc kubenswrapper[4813]: I0129 16:46:22.866983 4813 generic.go:334] "Generic (PLEG): container finished" podID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerID="47555d68ca3cd1f48d1846da983d0bd69849aae3a74ed23d05f95c80391b8ced" exitCode=0 Jan 29 16:46:22 crc kubenswrapper[4813]: I0129 
16:46:22.867038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" event={"ID":"aefb8fdb-ae07-4844-b7de-bf30c35e65d1","Type":"ContainerDied","Data":"47555d68ca3cd1f48d1846da983d0bd69849aae3a74ed23d05f95c80391b8ced"} Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.086150 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.278494 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-bundle\") pod \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.278583 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgz8l\" (UniqueName: \"kubernetes.io/projected/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-kube-api-access-jgz8l\") pod \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.278657 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-util\") pod \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\" (UID: \"aefb8fdb-ae07-4844-b7de-bf30c35e65d1\") " Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.279545 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-bundle" (OuterVolumeSpecName: "bundle") pod "aefb8fdb-ae07-4844-b7de-bf30c35e65d1" (UID: "aefb8fdb-ae07-4844-b7de-bf30c35e65d1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.289624 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-kube-api-access-jgz8l" (OuterVolumeSpecName: "kube-api-access-jgz8l") pod "aefb8fdb-ae07-4844-b7de-bf30c35e65d1" (UID: "aefb8fdb-ae07-4844-b7de-bf30c35e65d1"). InnerVolumeSpecName "kube-api-access-jgz8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.295395 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-util" (OuterVolumeSpecName: "util") pod "aefb8fdb-ae07-4844-b7de-bf30c35e65d1" (UID: "aefb8fdb-ae07-4844-b7de-bf30c35e65d1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.381080 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgz8l\" (UniqueName: \"kubernetes.io/projected/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-kube-api-access-jgz8l\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.381174 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.381193 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aefb8fdb-ae07-4844-b7de-bf30c35e65d1-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.883823 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" event={"ID":"aefb8fdb-ae07-4844-b7de-bf30c35e65d1","Type":"ContainerDied","Data":"3553d68dc43e5dc8c86f69ce0031b8cc45af94f7e061a96e058c1bbba684e31a"} Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.883888 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3553d68dc43e5dc8c86f69ce0031b8cc45af94f7e061a96e058c1bbba684e31a" Jan 29 16:46:24 crc kubenswrapper[4813]: I0129 16:46:24.883959 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.389226 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wl6z6"] Jan 29 16:46:29 crc kubenswrapper[4813]: E0129 16:46:29.389919 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerName="pull" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.389930 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerName="pull" Jan 29 16:46:29 crc kubenswrapper[4813]: E0129 16:46:29.389941 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerName="extract" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.389947 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerName="extract" Jan 29 16:46:29 crc kubenswrapper[4813]: E0129 16:46:29.389955 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerName="util" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.389962 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerName="util" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.390048 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefb8fdb-ae07-4844-b7de-bf30c35e65d1" containerName="extract" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.390425 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wl6z6" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.392433 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-r6pg4" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.392456 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.392504 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.397611 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wl6z6"] Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.443059 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2mz\" (UniqueName: \"kubernetes.io/projected/065cce61-9285-4d2c-846f-8c683898edd2-kube-api-access-tn2mz\") pod \"nmstate-operator-646758c888-wl6z6\" (UID: \"065cce61-9285-4d2c-846f-8c683898edd2\") " pod="openshift-nmstate/nmstate-operator-646758c888-wl6z6" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.543968 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2mz\" (UniqueName: \"kubernetes.io/projected/065cce61-9285-4d2c-846f-8c683898edd2-kube-api-access-tn2mz\") pod \"nmstate-operator-646758c888-wl6z6\" (UID: \"065cce61-9285-4d2c-846f-8c683898edd2\") " pod="openshift-nmstate/nmstate-operator-646758c888-wl6z6" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.564470 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2mz\" (UniqueName: \"kubernetes.io/projected/065cce61-9285-4d2c-846f-8c683898edd2-kube-api-access-tn2mz\") pod \"nmstate-operator-646758c888-wl6z6\" (UID: \"065cce61-9285-4d2c-846f-8c683898edd2\") " pod="openshift-nmstate/nmstate-operator-646758c888-wl6z6" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.708684 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-wl6z6" Jan 29 16:46:29 crc kubenswrapper[4813]: I0129 16:46:29.945364 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-wl6z6"] Jan 29 16:46:30 crc kubenswrapper[4813]: I0129 16:46:30.239814 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:46:30 crc kubenswrapper[4813]: I0129 16:46:30.239903 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:46:30 crc kubenswrapper[4813]: I0129 16:46:30.920353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wl6z6" event={"ID":"065cce61-9285-4d2c-846f-8c683898edd2","Type":"ContainerStarted","Data":"a083d57c0e079f221a29f35ad06f1da31ae074a17d7f44d8c1b99fbcc67cadbb"} Jan 29 16:46:32 crc kubenswrapper[4813]: I0129 16:46:32.932019 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-wl6z6" event={"ID":"065cce61-9285-4d2c-846f-8c683898edd2","Type":"ContainerStarted","Data":"27acd862b6d67e3dbb167deda1b2321b2d52321077981aeeac99dd73f8031f78"} Jan 29 16:46:32 crc kubenswrapper[4813]: I0129 16:46:32.949284 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-wl6z6" podStartSLOduration=1.826839595 podStartE2EDuration="3.949263393s" podCreationTimestamp="2026-01-29 16:46:29 +0000 UTC" firstStartedPulling="2026-01-29 16:46:29.962553547 +0000 UTC m=+1042.449756773" lastFinishedPulling="2026-01-29 16:46:32.084977355 +0000 UTC m=+1044.572180571" observedRunningTime="2026-01-29 16:46:32.947419439 +0000 UTC m=+1045.434622665" watchObservedRunningTime="2026-01-29 16:46:32.949263393 +0000 UTC m=+1045.436466609" Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.875877 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6rnpf"] Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.877372 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.881171 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-l59lc" Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.887316 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6rnpf"] Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.896443 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb"] Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.897138 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.906937 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.918329 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb"] Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.931131 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lwkbj"] Jan 29 16:46:33 crc kubenswrapper[4813]: I0129 16:46:33.932062 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.000924 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp2lw\" (UniqueName: \"kubernetes.io/projected/b90fa914-925f-456e-8466-53c5bb0c4464-kube-api-access-rp2lw\") pod \"nmstate-metrics-54757c584b-6rnpf\" (UID: \"b90fa914-925f-456e-8466-53c5bb0c4464\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.001735 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-xqdsb\" (UID: \"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.001930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgn6\" (UniqueName: \"kubernetes.io/projected/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-kube-api-access-qtgn6\") pod \"nmstate-webhook-8474b5b9d8-xqdsb\" (UID: \"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.054911 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk"] Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.055767 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.058272 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.058866 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.059500 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zcjrf" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.069537 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk"] Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.103268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-xqdsb\" (UID: \"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.103345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9nh\" (UniqueName: \"kubernetes.io/projected/008d4499-bdca-4247-9270-0f178a379a30-kube-api-access-rn9nh\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.103407 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-dbus-socket\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.103534 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-ovs-socket\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.103742 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-nmstate-lock\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.103822 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgn6\" (UniqueName: \"kubernetes.io/projected/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-kube-api-access-qtgn6\") pod \"nmstate-webhook-8474b5b9d8-xqdsb\" (UID: \"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.103864 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp2lw\" (UniqueName: \"kubernetes.io/projected/b90fa914-925f-456e-8466-53c5bb0c4464-kube-api-access-rp2lw\") pod \"nmstate-metrics-54757c584b-6rnpf\" (UID: 
\"b90fa914-925f-456e-8466-53c5bb0c4464\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" Jan 29 16:46:34 crc kubenswrapper[4813]: E0129 16:46:34.104153 4813 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 16:46:34 crc kubenswrapper[4813]: E0129 16:46:34.104217 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-tls-key-pair podName:9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5 nodeName:}" failed. No retries permitted until 2026-01-29 16:46:34.60420158 +0000 UTC m=+1047.091404796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-xqdsb" (UID: "9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5") : secret "openshift-nmstate-webhook" not found Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.122897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgn6\" (UniqueName: \"kubernetes.io/projected/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-kube-api-access-qtgn6\") pod \"nmstate-webhook-8474b5b9d8-xqdsb\" (UID: \"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.123400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp2lw\" (UniqueName: \"kubernetes.io/projected/b90fa914-925f-456e-8466-53c5bb0c4464-kube-api-access-rp2lw\") pod \"nmstate-metrics-54757c584b-6rnpf\" (UID: \"b90fa914-925f-456e-8466-53c5bb0c4464\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.196685 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.205859 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.205934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.205971 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn9nh\" (UniqueName: \"kubernetes.io/projected/008d4499-bdca-4247-9270-0f178a379a30-kube-api-access-rn9nh\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.206147 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-dbus-socket\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.206212 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94fl\" (UniqueName: \"kubernetes.io/projected/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-kube-api-access-b94fl\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.206244 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-ovs-socket\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.206311 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-nmstate-lock\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.206475 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-nmstate-lock\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.206489 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-dbus-socket\") pod 
\"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.206495 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/008d4499-bdca-4247-9270-0f178a379a30-ovs-socket\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.226833 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn9nh\" (UniqueName: \"kubernetes.io/projected/008d4499-bdca-4247-9270-0f178a379a30-kube-api-access-rn9nh\") pod \"nmstate-handler-lwkbj\" (UID: \"008d4499-bdca-4247-9270-0f178a379a30\") " pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.266696 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.309045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.309089 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.309144 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94fl\" (UniqueName: \"kubernetes.io/projected/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-kube-api-access-b94fl\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.310961 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.331613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.338148 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69f7f76474-dnp2c"] Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.338806 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.338820 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94fl\" (UniqueName: \"kubernetes.io/projected/eabd7875-0bba-4ea2-9fec-b87ddb267bd3-kube-api-access-b94fl\") pod \"nmstate-console-plugin-7754f76f8b-fvjfk\" (UID: \"eabd7875-0bba-4ea2-9fec-b87ddb267bd3\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.352624 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f7f76474-dnp2c"] Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.374231 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.447483 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6rnpf"] Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.511146 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/57034188-ab88-45dc-839c-740504a522f7-console-oauth-config\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.511186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-service-ca\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.511260 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hpmz\" (UniqueName: \"kubernetes.io/projected/57034188-ab88-45dc-839c-740504a522f7-kube-api-access-6hpmz\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.511283 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-oauth-serving-cert\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.511335 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-console-config\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.511363 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-trusted-ca-bundle\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc 
kubenswrapper[4813]: I0129 16:46:34.511397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/57034188-ab88-45dc-839c-740504a522f7-console-serving-cert\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.572724 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk"] Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.612627 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/57034188-ab88-45dc-839c-740504a522f7-console-serving-cert\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.612684 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/57034188-ab88-45dc-839c-740504a522f7-console-oauth-config\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.612702 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-service-ca\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.612730 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-xqdsb\" (UID: \"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.612770 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hpmz\" (UniqueName: \"kubernetes.io/projected/57034188-ab88-45dc-839c-740504a522f7-kube-api-access-6hpmz\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.613309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-oauth-serving-cert\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.613352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-console-config\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.613385 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-trusted-ca-bundle\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.614046 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-oauth-serving-cert\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.614097 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-console-config\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.614457 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-service-ca\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.615540 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57034188-ab88-45dc-839c-740504a522f7-trusted-ca-bundle\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.617006 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-xqdsb\" (UID: \"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.619043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/57034188-ab88-45dc-839c-740504a522f7-console-oauth-config\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.621258 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/57034188-ab88-45dc-839c-740504a522f7-console-serving-cert\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.630225 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hpmz\" (UniqueName: \"kubernetes.io/projected/57034188-ab88-45dc-839c-740504a522f7-kube-api-access-6hpmz\") pod \"console-69f7f76474-dnp2c\" (UID: \"57034188-ab88-45dc-839c-740504a522f7\") " pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.661222 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.818657 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.862247 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69f7f76474-dnp2c"] Jan 29 16:46:34 crc kubenswrapper[4813]: W0129 16:46:34.869405 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57034188_ab88_45dc_839c_740504a522f7.slice/crio-ff5b7a0dadc5f37a329fea50218f6039568d9fd3958c506c5864f3f17692af8b WatchSource:0}: Error finding container ff5b7a0dadc5f37a329fea50218f6039568d9fd3958c506c5864f3f17692af8b: Status 404 returned error can't find the container with id ff5b7a0dadc5f37a329fea50218f6039568d9fd3958c506c5864f3f17692af8b Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.946209 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" event={"ID":"b90fa914-925f-456e-8466-53c5bb0c4464","Type":"ContainerStarted","Data":"d6e9dbe85f02e1b29fac56da038ac3cffab26ba1c63e1e365e6ba5146c4178d2"} Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.947656 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lwkbj" event={"ID":"008d4499-bdca-4247-9270-0f178a379a30","Type":"ContainerStarted","Data":"c3a77474f14b7d4679402572297259c9dd813806c6f1da04783e5ddee3875e2a"} Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.948736 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f7f76474-dnp2c" event={"ID":"57034188-ab88-45dc-839c-740504a522f7","Type":"ContainerStarted","Data":"ff5b7a0dadc5f37a329fea50218f6039568d9fd3958c506c5864f3f17692af8b"} Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.949884 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" event={"ID":"eabd7875-0bba-4ea2-9fec-b87ddb267bd3","Type":"ContainerStarted","Data":"ab8f2dd7063231552285767f9e0a0619822aa11b904c543a9c94a82b99e7390e"} Jan 29 16:46:34 crc kubenswrapper[4813]: I0129 16:46:34.998730 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb"] Jan 29 16:46:35 crc kubenswrapper[4813]: W0129 16:46:35.001167 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b92c1e6_a66c_4d10_bf5d_fd8ccd18a7a5.slice/crio-a41a55f40f37ddd6ee2e509c55f2788f91e65f74567ea2d13a4ff5de096aff08 WatchSource:0}: Error finding container a41a55f40f37ddd6ee2e509c55f2788f91e65f74567ea2d13a4ff5de096aff08: Status 404 returned error can't find the container with id a41a55f40f37ddd6ee2e509c55f2788f91e65f74567ea2d13a4ff5de096aff08 Jan 29 16:46:35 crc kubenswrapper[4813]: I0129 16:46:35.961557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" event={"ID":"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5","Type":"ContainerStarted","Data":"a41a55f40f37ddd6ee2e509c55f2788f91e65f74567ea2d13a4ff5de096aff08"} Jan 29 16:46:35 crc kubenswrapper[4813]: I0129 16:46:35.963315 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69f7f76474-dnp2c" 
event={"ID":"57034188-ab88-45dc-839c-740504a522f7","Type":"ContainerStarted","Data":"5af8baaf4af997d08a97c871040525560e71dc84318f3a350fd42bb80fb79e21"} Jan 29 16:46:35 crc kubenswrapper[4813]: I0129 16:46:35.989059 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69f7f76474-dnp2c" podStartSLOduration=1.989036822 podStartE2EDuration="1.989036822s" podCreationTimestamp="2026-01-29 16:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:46:35.98316931 +0000 UTC m=+1048.470372536" watchObservedRunningTime="2026-01-29 16:46:35.989036822 +0000 UTC m=+1048.476240038" Jan 29 16:46:37 crc kubenswrapper[4813]: I0129 16:46:37.974887 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" event={"ID":"9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5","Type":"ContainerStarted","Data":"78e50914439ea724644a8898307ac39d2cb37942ef68fdc6f99bcb9b1dcdb96f"} Jan 29 16:46:37 crc kubenswrapper[4813]: I0129 16:46:37.975263 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:46:37 crc kubenswrapper[4813]: I0129 16:46:37.977520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" event={"ID":"b90fa914-925f-456e-8466-53c5bb0c4464","Type":"ContainerStarted","Data":"fb4b0a520ec6663a9e4d9ef2f0ddef671d0712974cc1024ff988f81427a040c7"} Jan 29 16:46:37 crc kubenswrapper[4813]: I0129 16:46:37.978916 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" event={"ID":"eabd7875-0bba-4ea2-9fec-b87ddb267bd3","Type":"ContainerStarted","Data":"1799c3b5d09b1a91fe6b7a2b5d4de87c15cce1843093b8f4beef10bfe1e7064c"} Jan 29 16:46:37 crc kubenswrapper[4813]: I0129 16:46:37.989538 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" podStartSLOduration=2.326782219 podStartE2EDuration="4.989522289s" podCreationTimestamp="2026-01-29 16:46:33 +0000 UTC" firstStartedPulling="2026-01-29 16:46:35.002900636 +0000 UTC m=+1047.490103852" lastFinishedPulling="2026-01-29 16:46:37.665640706 +0000 UTC m=+1050.152843922" observedRunningTime="2026-01-29 16:46:37.988794118 +0000 UTC m=+1050.475997334" watchObservedRunningTime="2026-01-29 16:46:37.989522289 +0000 UTC m=+1050.476725505" Jan 29 16:46:38 crc kubenswrapper[4813]: I0129 16:46:38.011532 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-fvjfk" podStartSLOduration=0.988221915 podStartE2EDuration="4.011504653s" podCreationTimestamp="2026-01-29 16:46:34 +0000 UTC" firstStartedPulling="2026-01-29 16:46:34.586711929 +0000 UTC m=+1047.073915145" lastFinishedPulling="2026-01-29 16:46:37.609994667 +0000 UTC m=+1050.097197883" observedRunningTime="2026-01-29 16:46:38.01140283 +0000 UTC m=+1050.498606046" watchObservedRunningTime="2026-01-29 16:46:38.011504653 +0000 UTC m=+1050.498707899" Jan 29 16:46:39 crc kubenswrapper[4813]: I0129 16:46:39.995240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" event={"ID":"b90fa914-925f-456e-8466-53c5bb0c4464","Type":"ContainerStarted","Data":"70617c363d2a99fe85ea803d78ee1eb6bd1fbc3fde83e0e054a21a81fbaf49d8"} Jan 29 16:46:40 crc kubenswrapper[4813]: 
I0129 16:46:40.012592 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-6rnpf" podStartSLOduration=1.67311202 podStartE2EDuration="7.012572708s" podCreationTimestamp="2026-01-29 16:46:33 +0000 UTC" firstStartedPulling="2026-01-29 16:46:34.454827698 +0000 UTC m=+1046.942030914" lastFinishedPulling="2026-01-29 16:46:39.794288386 +0000 UTC m=+1052.281491602" observedRunningTime="2026-01-29 16:46:40.009636492 +0000 UTC m=+1052.496839738" watchObservedRunningTime="2026-01-29 16:46:40.012572708 +0000 UTC m=+1052.499775924" Jan 29 16:46:44 crc kubenswrapper[4813]: I0129 16:46:44.662309 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:44 crc kubenswrapper[4813]: I0129 16:46:44.662680 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:44 crc kubenswrapper[4813]: I0129 16:46:44.669271 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:45 crc kubenswrapper[4813]: I0129 16:46:45.028523 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69f7f76474-dnp2c" Jan 29 16:46:45 crc kubenswrapper[4813]: I0129 16:46:45.083926 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-f9sf8"] Jan 29 16:46:46 crc kubenswrapper[4813]: I0129 16:46:46.030277 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lwkbj" event={"ID":"008d4499-bdca-4247-9270-0f178a379a30","Type":"ContainerStarted","Data":"4e2cc52295327a9a5937bf814084f9a1bf17d95f0b1a53ddd4d3203dd5ae169c"} Jan 29 16:46:46 crc kubenswrapper[4813]: I0129 16:46:46.048822 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lwkbj" podStartSLOduration=1.851254446 podStartE2EDuration="13.048801505s" podCreationTimestamp="2026-01-29 16:46:33 +0000 UTC" firstStartedPulling="2026-01-29 16:46:34.326557722 +0000 UTC m=+1046.813760938" lastFinishedPulling="2026-01-29 16:46:45.524104781 +0000 UTC m=+1058.011307997" observedRunningTime="2026-01-29 16:46:46.04692692 +0000 UTC m=+1058.534130136" watchObservedRunningTime="2026-01-29 16:46:46.048801505 +0000 UTC m=+1058.536004721" Jan 29 16:46:47 crc kubenswrapper[4813]: I0129 16:46:47.036358 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:54 crc kubenswrapper[4813]: I0129 16:46:54.286881 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lwkbj" Jan 29 16:46:54 crc kubenswrapper[4813]: I0129 16:46:54.824843 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-xqdsb" Jan 29 16:47:00 crc kubenswrapper[4813]: I0129 16:47:00.240147 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:47:00 crc kubenswrapper[4813]: I0129 16:47:00.240719 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" 
podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.147383 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs"] Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.149075 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.151199 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.161018 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs"] Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.268480 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzxf\" (UniqueName: \"kubernetes.io/projected/0d047ac7-886a-419d-bbd1-42a1ee103641-kube-api-access-zvzxf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.268556 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.268732 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.371460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvzxf\" (UniqueName: \"kubernetes.io/projected/0d047ac7-886a-419d-bbd1-42a1ee103641-kube-api-access-zvzxf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.371565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.371610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.372388 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.372431 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.393272 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvzxf\" (UniqueName: \"kubernetes.io/projected/0d047ac7-886a-419d-bbd1-42a1ee103641-kube-api-access-zvzxf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.485745 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.493993 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:08 crc kubenswrapper[4813]: I0129 16:47:08.718320 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs"] Jan 29 16:47:08 crc kubenswrapper[4813]: W0129 16:47:08.725101 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d047ac7_886a_419d_bbd1_42a1ee103641.slice/crio-288a6c5df5ade452b48a7a4abd361521486d90289e2588686890c98b22ab140b WatchSource:0}: Error finding container 288a6c5df5ade452b48a7a4abd361521486d90289e2588686890c98b22ab140b: Status 404 returned error can't find the container with id 288a6c5df5ade452b48a7a4abd361521486d90289e2588686890c98b22ab140b Jan 29 16:47:09 crc kubenswrapper[4813]: I0129 16:47:09.159221 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" event={"ID":"0d047ac7-886a-419d-bbd1-42a1ee103641","Type":"ContainerStarted","Data":"ecf9381d7ff5f5ac9187bec758e6f03121b4475692b654619ee9cee6db6926af"} Jan 29 16:47:09 crc kubenswrapper[4813]: I0129 16:47:09.159267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" event={"ID":"0d047ac7-886a-419d-bbd1-42a1ee103641","Type":"ContainerStarted","Data":"288a6c5df5ade452b48a7a4abd361521486d90289e2588686890c98b22ab140b"} Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.125364 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-f9sf8" podUID="0a897da4-3d6d-41f6-9fea-695b30bcd6f7" containerName="console" containerID="cri-o://d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686" gracePeriod=15 Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.167334 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerID="ecf9381d7ff5f5ac9187bec758e6f03121b4475692b654619ee9cee6db6926af" exitCode=0 Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.167389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" event={"ID":"0d047ac7-886a-419d-bbd1-42a1ee103641","Type":"ContainerDied","Data":"ecf9381d7ff5f5ac9187bec758e6f03121b4475692b654619ee9cee6db6926af"} Jan 29 16:47:10 crc kubenswrapper[4813]: E0129 16:47:10.336862 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a897da4_3d6d_41f6_9fea_695b30bcd6f7.slice/crio-conmon-d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.688039 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-f9sf8_0a897da4-3d6d-41f6-9fea-695b30bcd6f7/console/0.log" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.688693 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.818392 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-service-ca\") pod \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.818487 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxmwm\" (UniqueName: \"kubernetes.io/projected/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-kube-api-access-wxmwm\") pod \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.818533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-oauth-config\") pod \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.819172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-config\") pod \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.819256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-oauth-serving-cert\") pod \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.819288 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-trusted-ca-bundle\") pod \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.819319 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-serving-cert\") pod \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\" (UID: \"0a897da4-3d6d-41f6-9fea-695b30bcd6f7\") " Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.819598 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-config" (OuterVolumeSpecName: "console-config") pod "0a897da4-3d6d-41f6-9fea-695b30bcd6f7" (UID: "0a897da4-3d6d-41f6-9fea-695b30bcd6f7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.819691 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-service-ca" (OuterVolumeSpecName: "service-ca") pod "0a897da4-3d6d-41f6-9fea-695b30bcd6f7" (UID: "0a897da4-3d6d-41f6-9fea-695b30bcd6f7"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.819758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0a897da4-3d6d-41f6-9fea-695b30bcd6f7" (UID: "0a897da4-3d6d-41f6-9fea-695b30bcd6f7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.820221 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0a897da4-3d6d-41f6-9fea-695b30bcd6f7" (UID: "0a897da4-3d6d-41f6-9fea-695b30bcd6f7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.820645 4813 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.820677 4813 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.820699 4813 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.820710 4813 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.826962 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-kube-api-access-wxmwm" (OuterVolumeSpecName: "kube-api-access-wxmwm") pod "0a897da4-3d6d-41f6-9fea-695b30bcd6f7" (UID: "0a897da4-3d6d-41f6-9fea-695b30bcd6f7"). InnerVolumeSpecName "kube-api-access-wxmwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.827878 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0a897da4-3d6d-41f6-9fea-695b30bcd6f7" (UID: "0a897da4-3d6d-41f6-9fea-695b30bcd6f7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.831424 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0a897da4-3d6d-41f6-9fea-695b30bcd6f7" (UID: "0a897da4-3d6d-41f6-9fea-695b30bcd6f7"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.921537 4813 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.921568 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxmwm\" (UniqueName: \"kubernetes.io/projected/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-kube-api-access-wxmwm\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:10 crc kubenswrapper[4813]: I0129 16:47:10.921581 4813 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a897da4-3d6d-41f6-9fea-695b30bcd6f7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.174872 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-f9sf8_0a897da4-3d6d-41f6-9fea-695b30bcd6f7/console/0.log" Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.174952 4813 generic.go:334] "Generic (PLEG): container finished" podID="0a897da4-3d6d-41f6-9fea-695b30bcd6f7" containerID="d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686" exitCode=2 Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.174994 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f9sf8" event={"ID":"0a897da4-3d6d-41f6-9fea-695b30bcd6f7","Type":"ContainerDied","Data":"d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686"} Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.175038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f9sf8" event={"ID":"0a897da4-3d6d-41f6-9fea-695b30bcd6f7","Type":"ContainerDied","Data":"75f94ad76f45869a70de3e68ad2aeea010b7e6fb00e6bbe9d49b992d6f69524c"} Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.175040 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f9sf8" Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.175140 4813 scope.go:117] "RemoveContainer" containerID="d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686" Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.193249 4813 scope.go:117] "RemoveContainer" containerID="d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686" Jan 29 16:47:11 crc kubenswrapper[4813]: E0129 16:47:11.193655 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686\": container with ID starting with d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686 not found: ID does not exist" containerID="d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686" Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.193737 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686"} err="failed to get container status \"d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686\": rpc error: code = NotFound desc = could not find container \"d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686\": container with ID starting with d914a46cae64b2bc430035c4e062604a2fd50ddbf84f7b5c23dd4ef67c506686 not found: ID does not exist" Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.210225 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-f9sf8"] Jan 29 16:47:11 crc kubenswrapper[4813]: I0129 16:47:11.214845 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-f9sf8"] Jan 29 16:47:12 crc kubenswrapper[4813]: I0129 16:47:12.250675 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a897da4-3d6d-41f6-9fea-695b30bcd6f7" path="/var/lib/kubelet/pods/0a897da4-3d6d-41f6-9fea-695b30bcd6f7/volumes" Jan 29 16:47:14 crc kubenswrapper[4813]: I0129 16:47:14.195428 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerID="7f36dbefabce9919c40ba8a3bab5b62978a5b0996e4817f7c9bdec50e03d70b3" exitCode=0 Jan 29 16:47:14 crc kubenswrapper[4813]: I0129 16:47:14.195474 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" event={"ID":"0d047ac7-886a-419d-bbd1-42a1ee103641","Type":"ContainerDied","Data":"7f36dbefabce9919c40ba8a3bab5b62978a5b0996e4817f7c9bdec50e03d70b3"} Jan 29 16:47:15 crc kubenswrapper[4813]: I0129 16:47:15.202872 4813 generic.go:334] "Generic (PLEG): container finished" podID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerID="56602b68314f3c88f980c39fbfdcba81c5d9eba984041aca563b25f823f18f8f" exitCode=0 Jan 29 16:47:15 crc kubenswrapper[4813]: I0129 16:47:15.202998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" event={"ID":"0d047ac7-886a-419d-bbd1-42a1ee103641","Type":"ContainerDied","Data":"56602b68314f3c88f980c39fbfdcba81c5d9eba984041aca563b25f823f18f8f"} Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.450083 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.602034 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-util\") pod \"0d047ac7-886a-419d-bbd1-42a1ee103641\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.602147 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-bundle\") pod \"0d047ac7-886a-419d-bbd1-42a1ee103641\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.602193 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvzxf\" (UniqueName: \"kubernetes.io/projected/0d047ac7-886a-419d-bbd1-42a1ee103641-kube-api-access-zvzxf\") pod \"0d047ac7-886a-419d-bbd1-42a1ee103641\" (UID: \"0d047ac7-886a-419d-bbd1-42a1ee103641\") " Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.605531 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-bundle" (OuterVolumeSpecName: "bundle") pod "0d047ac7-886a-419d-bbd1-42a1ee103641" (UID: "0d047ac7-886a-419d-bbd1-42a1ee103641"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.609981 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d047ac7-886a-419d-bbd1-42a1ee103641-kube-api-access-zvzxf" (OuterVolumeSpecName: "kube-api-access-zvzxf") pod "0d047ac7-886a-419d-bbd1-42a1ee103641" (UID: "0d047ac7-886a-419d-bbd1-42a1ee103641"). InnerVolumeSpecName "kube-api-access-zvzxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.614147 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-util" (OuterVolumeSpecName: "util") pod "0d047ac7-886a-419d-bbd1-42a1ee103641" (UID: "0d047ac7-886a-419d-bbd1-42a1ee103641"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.703077 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.703115 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvzxf\" (UniqueName: \"kubernetes.io/projected/0d047ac7-886a-419d-bbd1-42a1ee103641-kube-api-access-zvzxf\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:16 crc kubenswrapper[4813]: I0129 16:47:16.703143 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0d047ac7-886a-419d-bbd1-42a1ee103641-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:47:17 crc kubenswrapper[4813]: I0129 16:47:17.218534 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" event={"ID":"0d047ac7-886a-419d-bbd1-42a1ee103641","Type":"ContainerDied","Data":"288a6c5df5ade452b48a7a4abd361521486d90289e2588686890c98b22ab140b"} Jan 29 16:47:17 crc kubenswrapper[4813]: I0129 16:47:17.218599 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="288a6c5df5ade452b48a7a4abd361521486d90289e2588686890c98b22ab140b" Jan 29 16:47:17 crc kubenswrapper[4813]: I0129 16:47:17.219089 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.647824 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc"] Jan 29 16:47:26 crc kubenswrapper[4813]: E0129 16:47:26.648586 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a897da4-3d6d-41f6-9fea-695b30bcd6f7" containerName="console" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.648602 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a897da4-3d6d-41f6-9fea-695b30bcd6f7" containerName="console" Jan 29 16:47:26 crc kubenswrapper[4813]: E0129 16:47:26.648612 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerName="pull" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.648618 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerName="pull" Jan 29 16:47:26 crc kubenswrapper[4813]: E0129 16:47:26.648636 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerName="extract" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.648642 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerName="extract" Jan 29 16:47:26 crc kubenswrapper[4813]: E0129 16:47:26.648649 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerName="util" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.648655 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerName="util" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.648750 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a897da4-3d6d-41f6-9fea-695b30bcd6f7" containerName="console" Jan 
29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.648759 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d047ac7-886a-419d-bbd1-42a1ee103641" containerName="extract" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.649190 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.654511 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.654614 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.654666 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-b7scx" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.654738 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.654744 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.710212 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc"] Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.727632 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f2eda5a-821a-4563-b99b-01b197b48993-apiservice-cert\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.727685 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f2eda5a-821a-4563-b99b-01b197b48993-webhook-cert\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.727772 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79h2f\" (UniqueName: \"kubernetes.io/projected/4f2eda5a-821a-4563-b99b-01b197b48993-kube-api-access-79h2f\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.828513 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f2eda5a-821a-4563-b99b-01b197b48993-apiservice-cert\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.828581 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4f2eda5a-821a-4563-b99b-01b197b48993-webhook-cert\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.828666 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79h2f\" (UniqueName: \"kubernetes.io/projected/4f2eda5a-821a-4563-b99b-01b197b48993-kube-api-access-79h2f\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.850178 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f2eda5a-821a-4563-b99b-01b197b48993-webhook-cert\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.854704 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f2eda5a-821a-4563-b99b-01b197b48993-apiservice-cert\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.863689 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79h2f\" (UniqueName: \"kubernetes.io/projected/4f2eda5a-821a-4563-b99b-01b197b48993-kube-api-access-79h2f\") pod \"metallb-operator-controller-manager-68bd5b494f-lprbc\" (UID: \"4f2eda5a-821a-4563-b99b-01b197b48993\") " pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.900773 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-769db779b-hnn5b"] Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.901446 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.903217 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.903555 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.903822 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-r6kf5" Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.914225 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-769db779b-hnn5b"] Jan 29 16:47:26 crc kubenswrapper[4813]: I0129 16:47:26.968013 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.031777 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhwpk\" (UniqueName: \"kubernetes.io/projected/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-kube-api-access-hhwpk\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.032090 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-apiservice-cert\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.032271 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-webhook-cert\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.134653 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-webhook-cert\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.135204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhwpk\" (UniqueName: \"kubernetes.io/projected/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-kube-api-access-hhwpk\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.135238 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-apiservice-cert\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.142190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-apiservice-cert\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.142219 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-webhook-cert\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " 
pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.167886 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhwpk\" (UniqueName: \"kubernetes.io/projected/d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6-kube-api-access-hhwpk\") pod \"metallb-operator-webhook-server-769db779b-hnn5b\" (UID: \"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6\") " pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.217870 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.445841 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-769db779b-hnn5b"] Jan 29 16:47:27 crc kubenswrapper[4813]: I0129 16:47:27.487407 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc"] Jan 29 16:47:27 crc kubenswrapper[4813]: W0129 16:47:27.491851 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f2eda5a_821a_4563_b99b_01b197b48993.slice/crio-a21ca7a5061e1117527f288c4b9e9289008308379e14bc4e781bb2380f4e8974 WatchSource:0}: Error finding container a21ca7a5061e1117527f288c4b9e9289008308379e14bc4e781bb2380f4e8974: Status 404 returned error can't find the container with id a21ca7a5061e1117527f288c4b9e9289008308379e14bc4e781bb2380f4e8974 Jan 29 16:47:28 crc kubenswrapper[4813]: I0129 16:47:28.285225 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" event={"ID":"4f2eda5a-821a-4563-b99b-01b197b48993","Type":"ContainerStarted","Data":"a21ca7a5061e1117527f288c4b9e9289008308379e14bc4e781bb2380f4e8974"} Jan 29 16:47:28 crc kubenswrapper[4813]: I0129 16:47:28.287546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" event={"ID":"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6","Type":"ContainerStarted","Data":"c07591ce6f0cadbbaf35eaa34981382bb9451c2297b472a889061d613f36bb12"} Jan 29 16:47:30 crc kubenswrapper[4813]: I0129 16:47:30.239860 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:47:30 crc kubenswrapper[4813]: I0129 16:47:30.240208 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:47:30 crc kubenswrapper[4813]: I0129 16:47:30.251075 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:47:30 crc kubenswrapper[4813]: I0129 16:47:30.252191 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f51954817b4222a923a0139a8498177396724dcae69eeee154578e427cd34a8"} 
pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:47:30 crc kubenswrapper[4813]: I0129 16:47:30.252295 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://7f51954817b4222a923a0139a8498177396724dcae69eeee154578e427cd34a8" gracePeriod=600 Jan 29 16:47:31 crc kubenswrapper[4813]: I0129 16:47:31.315267 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="7f51954817b4222a923a0139a8498177396724dcae69eeee154578e427cd34a8" exitCode=0 Jan 29 16:47:31 crc kubenswrapper[4813]: I0129 16:47:31.315327 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"7f51954817b4222a923a0139a8498177396724dcae69eeee154578e427cd34a8"} Jan 29 16:47:31 crc kubenswrapper[4813]: I0129 16:47:31.315376 4813 scope.go:117] "RemoveContainer" containerID="5309aa3c6552f400eb634ed59cfbbbe91e4b5c5d730cde7759bc71dc4f1aa28f" Jan 29 16:47:33 crc kubenswrapper[4813]: I0129 16:47:33.334211 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" event={"ID":"d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6","Type":"ContainerStarted","Data":"6ed08d6518bd18487ef18c8bd39fec626c20d7edb372ed1b5255334bc3feed4c"} Jan 29 16:47:33 crc kubenswrapper[4813]: I0129 16:47:33.334770 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:47:33 crc kubenswrapper[4813]: I0129 16:47:33.337579 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"8987fbd6eab75cb8c8d4b0dc3c9cd4584d6a2ba36fcbf1141525385eea963b7d"} Jan 29 16:47:33 crc kubenswrapper[4813]: I0129 16:47:33.339556 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" event={"ID":"4f2eda5a-821a-4563-b99b-01b197b48993","Type":"ContainerStarted","Data":"c452794215bf6ae677a3a6ea6035d06a170a072aabca4ca424111366de44035d"} Jan 29 16:47:33 crc kubenswrapper[4813]: I0129 16:47:33.339688 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:47:33 crc kubenswrapper[4813]: I0129 16:47:33.358339 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" podStartSLOduration=2.408722839 podStartE2EDuration="7.358322485s" podCreationTimestamp="2026-01-29 16:47:26 +0000 UTC" firstStartedPulling="2026-01-29 16:47:27.459755346 +0000 UTC m=+1099.946958562" lastFinishedPulling="2026-01-29 16:47:32.409354952 +0000 UTC m=+1104.896558208" observedRunningTime="2026-01-29 16:47:33.35536275 +0000 UTC m=+1105.842565966" watchObservedRunningTime="2026-01-29 16:47:33.358322485 +0000 UTC m=+1105.845525701" Jan 29 16:47:33 crc kubenswrapper[4813]: I0129 16:47:33.396351 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" podStartSLOduration=2.502058637 podStartE2EDuration="7.3963318s" podCreationTimestamp="2026-01-29 16:47:26 +0000 UTC" firstStartedPulling="2026-01-29 16:47:27.497265715 +0000 UTC m=+1099.984468921" lastFinishedPulling="2026-01-29 16:47:32.391538868 +0000 UTC m=+1104.878742084" observedRunningTime="2026-01-29 16:47:33.395135706 +0000 UTC m=+1105.882338942" watchObservedRunningTime="2026-01-29 16:47:33.3963318 +0000 UTC m=+1105.883535016" Jan 29 16:47:47 crc kubenswrapper[4813]: I0129 16:47:47.224209 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-769db779b-hnn5b" Jan 29 16:48:06 crc kubenswrapper[4813]: I0129 16:48:06.969988 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68bd5b494f-lprbc" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.625320 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xfmps"] Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.627816 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.629576 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.629839 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-q9fh5" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.630002 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.648728 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk"] Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.649871 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.652669 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.661378 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk"] Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.710950 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nf7rq"] Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.711978 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nf7rq" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.715295 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.715687 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.715753 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.719415 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rl2d6" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.735393 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6nftd"] Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.736240 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6nftd" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.748815 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0a80701-75b8-47bd-a5ca-a17d911b3d05-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sswwk\" (UID: \"e0a80701-75b8-47bd-a5ca-a17d911b3d05\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.748877 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-reloader\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.748905 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-sockets\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.748931 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-startup\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.748963 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q9mw\" (UniqueName: \"kubernetes.io/projected/6df3ba9c-ad21-4805-bee6-eaa997d79d87-kube-api-access-8q9mw\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.748987 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df3ba9c-ad21-4805-bee6-eaa997d79d87-metrics-certs\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.749146 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-conf\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.749231 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-metrics\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.749256 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtvk\" (UniqueName: \"kubernetes.io/projected/e0a80701-75b8-47bd-a5ca-a17d911b3d05-kube-api-access-6rtvk\") pod \"frr-k8s-webhook-server-7df86c4f6c-sswwk\" (UID: \"e0a80701-75b8-47bd-a5ca-a17d911b3d05\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.751525 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6nftd"] Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.752723 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-sockets\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850761 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-startup\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850786 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q9mw\" (UniqueName: \"kubernetes.io/projected/6df3ba9c-ad21-4805-bee6-eaa997d79d87-kube-api-access-8q9mw\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850835 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metrics-certs\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850852 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df3ba9c-ad21-4805-bee6-eaa997d79d87-metrics-certs\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " 
pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2p2l\" (UniqueName: \"kubernetes.io/projected/2b3d4378-fe4d-4a46-8a43-c66518db31e0-kube-api-access-f2p2l\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850891 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-conf\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850906 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metallb-excludel2\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850925 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850953 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-cert\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.850992 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-metrics\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.851016 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtvk\" (UniqueName: \"kubernetes.io/projected/e0a80701-75b8-47bd-a5ca-a17d911b3d05-kube-api-access-6rtvk\") pod \"frr-k8s-webhook-server-7df86c4f6c-sswwk\" (UID: \"e0a80701-75b8-47bd-a5ca-a17d911b3d05\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.851042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0a80701-75b8-47bd-a5ca-a17d911b3d05-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sswwk\" (UID: \"e0a80701-75b8-47bd-a5ca-a17d911b3d05\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.851071 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl8j\" (UniqueName: \"kubernetes.io/projected/98ca8c84-696a-4a20-a620-beebb81a4d9b-kube-api-access-8jl8j\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " 
pod="metallb-system/controller-6968d8fdc4-6nftd" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.851094 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-reloader\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.851836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-reloader\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.852090 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-conf\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.852158 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-metrics\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.852686 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-sockets\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.853487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6df3ba9c-ad21-4805-bee6-eaa997d79d87-frr-startup\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.861712 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6df3ba9c-ad21-4805-bee6-eaa997d79d87-metrics-certs\") pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.867630 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0a80701-75b8-47bd-a5ca-a17d911b3d05-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-sswwk\" (UID: \"e0a80701-75b8-47bd-a5ca-a17d911b3d05\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.870088 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtvk\" (UniqueName: \"kubernetes.io/projected/e0a80701-75b8-47bd-a5ca-a17d911b3d05-kube-api-access-6rtvk\") pod \"frr-k8s-webhook-server-7df86c4f6c-sswwk\" (UID: \"e0a80701-75b8-47bd-a5ca-a17d911b3d05\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.877691 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q9mw\" (UniqueName: \"kubernetes.io/projected/6df3ba9c-ad21-4805-bee6-eaa997d79d87-kube-api-access-8q9mw\") 
pod \"frr-k8s-xfmps\" (UID: \"6df3ba9c-ad21-4805-bee6-eaa997d79d87\") " pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.947936 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.959691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metrics-certs\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.959764 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2p2l\" (UniqueName: \"kubernetes.io/projected/2b3d4378-fe4d-4a46-8a43-c66518db31e0-kube-api-access-f2p2l\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.959805 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metallb-excludel2\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.959831 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd" Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.959861 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-cert\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd" Jan 29 16:48:07 crc kubenswrapper[4813]: E0129 16:48:07.959882 4813 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.959932 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jl8j\" (UniqueName: \"kubernetes.io/projected/98ca8c84-696a-4a20-a620-beebb81a4d9b-kube-api-access-8jl8j\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd" Jan 29 16:48:07 crc kubenswrapper[4813]: E0129 16:48:07.959955 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metrics-certs podName:2b3d4378-fe4d-4a46-8a43-c66518db31e0 nodeName:}" failed. No retries permitted until 2026-01-29 16:48:08.459935139 +0000 UTC m=+1140.947138355 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metrics-certs") pod "speaker-nf7rq" (UID: "2b3d4378-fe4d-4a46-8a43-c66518db31e0") : secret "speaker-certs-secret" not found
Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.959980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq"
Jan 29 16:48:07 crc kubenswrapper[4813]: E0129 16:48:07.959985 4813 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 29 16:48:07 crc kubenswrapper[4813]: E0129 16:48:07.960123 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs podName:98ca8c84-696a-4a20-a620-beebb81a4d9b nodeName:}" failed. No retries permitted until 2026-01-29 16:48:08.460081924 +0000 UTC m=+1140.947285200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs") pod "controller-6968d8fdc4-6nftd" (UID: "98ca8c84-696a-4a20-a620-beebb81a4d9b") : secret "controller-certs-secret" not found
Jan 29 16:48:07 crc kubenswrapper[4813]: E0129 16:48:07.960230 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 29 16:48:07 crc kubenswrapper[4813]: E0129 16:48:07.960265 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist podName:2b3d4378-fe4d-4a46-8a43-c66518db31e0 nodeName:}" failed. No retries permitted until 2026-01-29 16:48:08.460254219 +0000 UTC m=+1140.947457445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist") pod "speaker-nf7rq" (UID: "2b3d4378-fe4d-4a46-8a43-c66518db31e0") : secret "metallb-memberlist" not found
Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.960796 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metallb-excludel2\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq"
Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.966598 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk"
Jan 29 16:48:07 crc kubenswrapper[4813]: I0129 16:48:07.971609 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-cert\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd"
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.006690 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2p2l\" (UniqueName: \"kubernetes.io/projected/2b3d4378-fe4d-4a46-8a43-c66518db31e0-kube-api-access-f2p2l\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq"
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.032990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jl8j\" (UniqueName: \"kubernetes.io/projected/98ca8c84-696a-4a20-a620-beebb81a4d9b-kube-api-access-8jl8j\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd"
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.214210 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.260538 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk"]
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.468670 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq"
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.468832 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metrics-certs\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq"
Jan 29 16:48:08 crc kubenswrapper[4813]: E0129 16:48:08.468860 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 29 16:48:08 crc kubenswrapper[4813]: E0129 16:48:08.468970 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist podName:2b3d4378-fe4d-4a46-8a43-c66518db31e0 nodeName:}" failed. No retries permitted until 2026-01-29 16:48:09.468944481 +0000 UTC m=+1141.956147747 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist") pod "speaker-nf7rq" (UID: "2b3d4378-fe4d-4a46-8a43-c66518db31e0") : secret "metallb-memberlist" not found
Jan 29 16:48:08 crc kubenswrapper[4813]: E0129 16:48:08.468981 4813 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 29 16:48:08 crc kubenswrapper[4813]: E0129 16:48:08.469043 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs podName:98ca8c84-696a-4a20-a620-beebb81a4d9b nodeName:}" failed. No retries permitted until 2026-01-29 16:48:09.469022243 +0000 UTC m=+1141.956225459 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs") pod "controller-6968d8fdc4-6nftd" (UID: "98ca8c84-696a-4a20-a620-beebb81a4d9b") : secret "controller-certs-secret" not found
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.468879 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd"
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.474218 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-metrics-certs\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq"
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.538611 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" event={"ID":"e0a80701-75b8-47bd-a5ca-a17d911b3d05","Type":"ContainerStarted","Data":"bbfcfe8e909e856271a0e085fa9f8251a97cc34947c17980d9fe2c4cf82393ec"}
Jan 29 16:48:08 crc kubenswrapper[4813]: I0129 16:48:08.539833 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerStarted","Data":"180ad194dd5731608749fc36e485080415e64620b79276229aa7f08b4e38deb7"}
Jan 29 16:48:09 crc kubenswrapper[4813]: I0129 16:48:09.484561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd"
Jan 29 16:48:09 crc kubenswrapper[4813]: I0129 16:48:09.485033 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq"
Jan 29 16:48:09 crc kubenswrapper[4813]: E0129 16:48:09.485172 4813 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 29 16:48:09 crc kubenswrapper[4813]: E0129 16:48:09.485226 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist podName:2b3d4378-fe4d-4a46-8a43-c66518db31e0 nodeName:}" failed. No retries permitted until 2026-01-29 16:48:11.485209484 +0000 UTC m=+1143.972412700 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist") pod "speaker-nf7rq" (UID: "2b3d4378-fe4d-4a46-8a43-c66518db31e0") : secret "metallb-memberlist" not found
Jan 29 16:48:09 crc kubenswrapper[4813]: I0129 16:48:09.489697 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98ca8c84-696a-4a20-a620-beebb81a4d9b-metrics-certs\") pod \"controller-6968d8fdc4-6nftd\" (UID: \"98ca8c84-696a-4a20-a620-beebb81a4d9b\") " pod="metallb-system/controller-6968d8fdc4-6nftd"
Jan 29 16:48:09 crc kubenswrapper[4813]: I0129 16:48:09.549917 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6nftd"
Jan 29 16:48:10 crc kubenswrapper[4813]: I0129 16:48:10.026995 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6nftd"]
Jan 29 16:48:10 crc kubenswrapper[4813]: W0129 16:48:10.088041 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ca8c84_696a_4a20_a620_beebb81a4d9b.slice/crio-bd6f16c385b7c6457f1af11dd7e86f8ad41597a02e841c373ec8b673aac4dc46 WatchSource:0}: Error finding container bd6f16c385b7c6457f1af11dd7e86f8ad41597a02e841c373ec8b673aac4dc46: Status 404 returned error can't find the container with id bd6f16c385b7c6457f1af11dd7e86f8ad41597a02e841c373ec8b673aac4dc46
Jan 29 16:48:10 crc kubenswrapper[4813]: I0129 16:48:10.554371 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6nftd" event={"ID":"98ca8c84-696a-4a20-a620-beebb81a4d9b","Type":"ContainerStarted","Data":"5fff848c07ba76f9300c68807b6543a534e9cf2318977438210180d3b42b1c21"}
Jan 29 16:48:10 crc kubenswrapper[4813]: I0129 16:48:10.554715 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6nftd"
Jan 29 16:48:10 crc kubenswrapper[4813]: I0129 16:48:10.554735 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6nftd" event={"ID":"98ca8c84-696a-4a20-a620-beebb81a4d9b","Type":"ContainerStarted","Data":"086b17be9970aa75aaf7c1568535d5fa4bb5aa7511411dcdaad9dfae9b00fb48"}
Jan 29 16:48:10 crc kubenswrapper[4813]: I0129 16:48:10.554750 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6nftd" event={"ID":"98ca8c84-696a-4a20-a620-beebb81a4d9b","Type":"ContainerStarted","Data":"bd6f16c385b7c6457f1af11dd7e86f8ad41597a02e841c373ec8b673aac4dc46"}
Jan 29 16:48:10 crc kubenswrapper[4813]: I0129 16:48:10.584153 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6nftd" podStartSLOduration=3.58413117 podStartE2EDuration="3.58413117s" podCreationTimestamp="2026-01-29 16:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:48:10.58275264 +0000 UTC m=+1143.069955856" watchObservedRunningTime="2026-01-29 16:48:10.58413117 +0000 UTC m=+1143.071334386"
Jan 29 16:48:11 crc kubenswrapper[4813]: I0129 16:48:11.515755 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq"
pod="metallb-system/speaker-nf7rq" Jan 29 16:48:11 crc kubenswrapper[4813]: I0129 16:48:11.525861 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2b3d4378-fe4d-4a46-8a43-c66518db31e0-memberlist\") pod \"speaker-nf7rq\" (UID: \"2b3d4378-fe4d-4a46-8a43-c66518db31e0\") " pod="metallb-system/speaker-nf7rq" Jan 29 16:48:11 crc kubenswrapper[4813]: I0129 16:48:11.629702 4813 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rl2d6" Jan 29 16:48:11 crc kubenswrapper[4813]: I0129 16:48:11.638696 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nf7rq" Jan 29 16:48:11 crc kubenswrapper[4813]: W0129 16:48:11.671830 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b3d4378_fe4d_4a46_8a43_c66518db31e0.slice/crio-cd9aefc65c4eeeb9f5da8d389f7aa5406186303f6a298e0ae71ac961a66f5c65 WatchSource:0}: Error finding container cd9aefc65c4eeeb9f5da8d389f7aa5406186303f6a298e0ae71ac961a66f5c65: Status 404 returned error can't find the container with id cd9aefc65c4eeeb9f5da8d389f7aa5406186303f6a298e0ae71ac961a66f5c65 Jan 29 16:48:12 crc kubenswrapper[4813]: I0129 16:48:12.580791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nf7rq" event={"ID":"2b3d4378-fe4d-4a46-8a43-c66518db31e0","Type":"ContainerStarted","Data":"85010014fcaa5e2b7684aa8847aa0283172e2fc173079d13d89ab027f16dc992"} Jan 29 16:48:12 crc kubenswrapper[4813]: I0129 16:48:12.580865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nf7rq" event={"ID":"2b3d4378-fe4d-4a46-8a43-c66518db31e0","Type":"ContainerStarted","Data":"e91ff337f193ad2a4f25765b6ecb8c07eedbf4da23c7d986834a5285804cf737"} Jan 29 16:48:12 crc kubenswrapper[4813]: I0129 16:48:12.580876 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nf7rq" event={"ID":"2b3d4378-fe4d-4a46-8a43-c66518db31e0","Type":"ContainerStarted","Data":"cd9aefc65c4eeeb9f5da8d389f7aa5406186303f6a298e0ae71ac961a66f5c65"} Jan 29 16:48:12 crc kubenswrapper[4813]: I0129 16:48:12.581139 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nf7rq" Jan 29 16:48:12 crc kubenswrapper[4813]: I0129 16:48:12.605911 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nf7rq" podStartSLOduration=5.605885285 podStartE2EDuration="5.605885285s" podCreationTimestamp="2026-01-29 16:48:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:48:12.599411108 +0000 UTC m=+1145.086614324" watchObservedRunningTime="2026-01-29 16:48:12.605885285 +0000 UTC m=+1145.093088501" Jan 29 16:48:16 crc kubenswrapper[4813]: I0129 16:48:16.611914 4813 generic.go:334] "Generic (PLEG): container finished" podID="6df3ba9c-ad21-4805-bee6-eaa997d79d87" containerID="353744a12b9bb6b16813d5a53460b9d577ff3c3728217de62019bd87d3cb325f" exitCode=0 Jan 29 16:48:16 crc kubenswrapper[4813]: I0129 16:48:16.612025 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerDied","Data":"353744a12b9bb6b16813d5a53460b9d577ff3c3728217de62019bd87d3cb325f"} Jan 29 16:48:16 crc kubenswrapper[4813]: I0129 16:48:16.615832 4813 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" event={"ID":"e0a80701-75b8-47bd-a5ca-a17d911b3d05","Type":"ContainerStarted","Data":"d1e17dd6c9c3b15db4dbbef3ca5ac16a6468bdda25127c009b9038b3ddafbd61"} Jan 29 16:48:16 crc kubenswrapper[4813]: I0129 16:48:16.616550 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" Jan 29 16:48:16 crc kubenswrapper[4813]: I0129 16:48:16.653061 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk" podStartSLOduration=1.967266755 podStartE2EDuration="9.65304679s" podCreationTimestamp="2026-01-29 16:48:07 +0000 UTC" firstStartedPulling="2026-01-29 16:48:08.269303897 +0000 UTC m=+1140.756507103" lastFinishedPulling="2026-01-29 16:48:15.955083922 +0000 UTC m=+1148.442287138" observedRunningTime="2026-01-29 16:48:16.650373113 +0000 UTC m=+1149.137576339" watchObservedRunningTime="2026-01-29 16:48:16.65304679 +0000 UTC m=+1149.140250006" Jan 29 16:48:17 crc kubenswrapper[4813]: I0129 16:48:17.623414 4813 generic.go:334] "Generic (PLEG): container finished" podID="6df3ba9c-ad21-4805-bee6-eaa997d79d87" containerID="d7647831586b90bf4604cb1d219d0e4018c8839f77c64cb49f8c978c7f70a116" exitCode=0 Jan 29 16:48:17 crc kubenswrapper[4813]: I0129 16:48:17.623468 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerDied","Data":"d7647831586b90bf4604cb1d219d0e4018c8839f77c64cb49f8c978c7f70a116"} Jan 29 16:48:18 crc kubenswrapper[4813]: I0129 16:48:18.632737 4813 generic.go:334] "Generic (PLEG): container finished" podID="6df3ba9c-ad21-4805-bee6-eaa997d79d87" containerID="80138fed38802344f62b51a34a4a08fbe9ccd804efbfc1594f55dc13577d39df" exitCode=0 Jan 29 16:48:18 crc kubenswrapper[4813]: I0129 16:48:18.632798 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerDied","Data":"80138fed38802344f62b51a34a4a08fbe9ccd804efbfc1594f55dc13577d39df"} Jan 29 16:48:19 crc kubenswrapper[4813]: I0129 16:48:19.643073 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerStarted","Data":"86a2df034207eaa13ef84ab5aa7692650c6033260b41c103be9fa44f173b2556"} Jan 29 16:48:19 crc kubenswrapper[4813]: I0129 16:48:19.643482 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerStarted","Data":"a4a94d49ddde7c61cb8ce9881ca30c07fec5df4314c1a8747e65654894df31b6"} Jan 29 16:48:19 crc kubenswrapper[4813]: I0129 16:48:19.643496 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerStarted","Data":"9f3c5439fe4ae6c5fe044bedbdf4d15362d2175048ddc54bf05d0933badaf0e0"} Jan 29 16:48:19 crc kubenswrapper[4813]: I0129 16:48:19.643506 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerStarted","Data":"653e4061c8bbe87c3ab20f48eade5a0f9d6f7dae55fc3a8806eb1d6f13d461b6"} Jan 29 16:48:19 crc kubenswrapper[4813]: I0129 16:48:19.643514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerStarted","Data":"c4586d92697bca171fd26d1a9691458a1d9afbc6feeadc8cbc88664c46cde08d"} Jan 29 16:48:19 crc kubenswrapper[4813]: I0129 16:48:19.643522 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xfmps" event={"ID":"6df3ba9c-ad21-4805-bee6-eaa997d79d87","Type":"ContainerStarted","Data":"16e14ddbf934abee501ccbe462d2c6121656ef8f99aa8c8b36c92088c8c1e128"} Jan 29 16:48:19 crc kubenswrapper[4813]: I0129 16:48:19.643639 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:21 crc kubenswrapper[4813]: I0129 16:48:21.643172 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nf7rq" Jan 29 16:48:21 crc kubenswrapper[4813]: I0129 16:48:21.662919 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xfmps" podStartSLOduration=6.936722865 podStartE2EDuration="14.662899264s" podCreationTimestamp="2026-01-29 16:48:07 +0000 UTC" firstStartedPulling="2026-01-29 16:48:08.213872569 +0000 UTC m=+1140.701075775" lastFinishedPulling="2026-01-29 16:48:15.940048958 +0000 UTC m=+1148.427252174" observedRunningTime="2026-01-29 16:48:19.667774196 +0000 UTC m=+1152.154977412" watchObservedRunningTime="2026-01-29 16:48:21.662899264 +0000 UTC m=+1154.150102480" Jan 29 16:48:22 crc kubenswrapper[4813]: I0129 16:48:22.949021 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.028876 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xfmps" Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.033728 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"] Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.034815 4813 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.036166 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.051473 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"]
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.076800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.077044 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghbv\" (UniqueName: \"kubernetes.io/projected/f435c00e-812d-4034-9652-7535d6f694cd-kube-api-access-dghbv\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.077244 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.177864 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.177967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dghbv\" (UniqueName: \"kubernetes.io/projected/f435c00e-812d-4034-9652-7535d6f694cd-kube-api-access-dghbv\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.178043 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.178497 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.178724 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.196502 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dghbv\" (UniqueName: \"kubernetes.io/projected/f435c00e-812d-4034-9652-7535d6f694cd-kube-api-access-dghbv\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.351529 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.578600 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"]
Jan 29 16:48:23 crc kubenswrapper[4813]: I0129 16:48:23.670585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5" event={"ID":"f435c00e-812d-4034-9652-7535d6f694cd","Type":"ContainerStarted","Data":"18dfa4be61c222857141cdddb4b77d4ce131bd8d3ab71f62f305884d498fdcb9"}
Jan 29 16:48:24 crc kubenswrapper[4813]: I0129 16:48:24.679133 4813 generic.go:334] "Generic (PLEG): container finished" podID="f435c00e-812d-4034-9652-7535d6f694cd" containerID="30963d04fc2f1d397895876bedf9faa6b2b7292bc90d3dcfe8f7ec793a7bd367" exitCode=0
Jan 29 16:48:24 crc kubenswrapper[4813]: I0129 16:48:24.679196 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5" event={"ID":"f435c00e-812d-4034-9652-7535d6f694cd","Type":"ContainerDied","Data":"30963d04fc2f1d397895876bedf9faa6b2b7292bc90d3dcfe8f7ec793a7bd367"}
Jan 29 16:48:27 crc kubenswrapper[4813]: I0129 16:48:27.974698 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-sswwk"
Jan 29 16:48:28 crc kubenswrapper[4813]: I0129 16:48:28.700661 4813 generic.go:334] "Generic (PLEG): container finished" podID="f435c00e-812d-4034-9652-7535d6f694cd" containerID="5341d85db0d263192b0d0ec10de4cbaa931bad5759dd8597a3d3da30bda00983" exitCode=0
Jan 29 16:48:28 crc kubenswrapper[4813]: I0129 16:48:28.700714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5" event={"ID":"f435c00e-812d-4034-9652-7535d6f694cd","Type":"ContainerDied","Data":"5341d85db0d263192b0d0ec10de4cbaa931bad5759dd8597a3d3da30bda00983"}
Jan 29 16:48:29 crc kubenswrapper[4813]: I0129 16:48:29.555157 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6nftd"
Jan 29 16:48:29 crc kubenswrapper[4813]: I0129 16:48:29.709654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5" event={"ID":"f435c00e-812d-4034-9652-7535d6f694cd","Type":"ContainerStarted","Data":"41e8701084250434ea2f17bab2afa071b3ab8e2ad9b1c2eaa1b7fb4adb1eea69"}
Jan 29 16:48:29 crc kubenswrapper[4813]: I0129 16:48:29.732708 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5" podStartSLOduration=3.700091924 podStartE2EDuration="6.732688986s" podCreationTimestamp="2026-01-29 16:48:23 +0000 UTC" firstStartedPulling="2026-01-29 16:48:24.680688718 +0000 UTC m=+1157.167891934" lastFinishedPulling="2026-01-29 16:48:27.71328578 +0000 UTC m=+1160.200488996" observedRunningTime="2026-01-29 16:48:29.729443683 +0000 UTC m=+1162.216646919" watchObservedRunningTime="2026-01-29 16:48:29.732688986 +0000 UTC m=+1162.219892202"
Jan 29 16:48:30 crc kubenswrapper[4813]: I0129 16:48:30.720185 4813 generic.go:334] "Generic (PLEG): container finished" podID="f435c00e-812d-4034-9652-7535d6f694cd" containerID="41e8701084250434ea2f17bab2afa071b3ab8e2ad9b1c2eaa1b7fb4adb1eea69" exitCode=0
Jan 29 16:48:30 crc kubenswrapper[4813]: I0129 16:48:30.720252 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5" event={"ID":"f435c00e-812d-4034-9652-7535d6f694cd","Type":"ContainerDied","Data":"41e8701084250434ea2f17bab2afa071b3ab8e2ad9b1c2eaa1b7fb4adb1eea69"}
Jan 29 16:48:31 crc kubenswrapper[4813]: I0129 16:48:31.970713 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.092778 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dghbv\" (UniqueName: \"kubernetes.io/projected/f435c00e-812d-4034-9652-7535d6f694cd-kube-api-access-dghbv\") pod \"f435c00e-812d-4034-9652-7535d6f694cd\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") "
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.092898 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-bundle\") pod \"f435c00e-812d-4034-9652-7535d6f694cd\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") "
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.092957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-util\") pod \"f435c00e-812d-4034-9652-7535d6f694cd\" (UID: \"f435c00e-812d-4034-9652-7535d6f694cd\") "
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.094266 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-bundle" (OuterVolumeSpecName: "bundle") pod "f435c00e-812d-4034-9652-7535d6f694cd" (UID: "f435c00e-812d-4034-9652-7535d6f694cd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.098550 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f435c00e-812d-4034-9652-7535d6f694cd-kube-api-access-dghbv" (OuterVolumeSpecName: "kube-api-access-dghbv") pod "f435c00e-812d-4034-9652-7535d6f694cd" (UID: "f435c00e-812d-4034-9652-7535d6f694cd"). InnerVolumeSpecName "kube-api-access-dghbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.103990 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-util" (OuterVolumeSpecName: "util") pod "f435c00e-812d-4034-9652-7535d6f694cd" (UID: "f435c00e-812d-4034-9652-7535d6f694cd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.193851 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.193883 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f435c00e-812d-4034-9652-7535d6f694cd-util\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.193897 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dghbv\" (UniqueName: \"kubernetes.io/projected/f435c00e-812d-4034-9652-7535d6f694cd-kube-api-access-dghbv\") on node \"crc\" DevicePath \"\""
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.732393 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5" event={"ID":"f435c00e-812d-4034-9652-7535d6f694cd","Type":"ContainerDied","Data":"18dfa4be61c222857141cdddb4b77d4ce131bd8d3ab71f62f305884d498fdcb9"}
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.732655 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18dfa4be61c222857141cdddb4b77d4ce131bd8d3ab71f62f305884d498fdcb9"
Jan 29 16:48:32 crc kubenswrapper[4813]: I0129 16:48:32.732436 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.504581 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"]
Jan 29 16:48:36 crc kubenswrapper[4813]: E0129 16:48:36.505268 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f435c00e-812d-4034-9652-7535d6f694cd" containerName="util"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.505284 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f435c00e-812d-4034-9652-7535d6f694cd" containerName="util"
Jan 29 16:48:36 crc kubenswrapper[4813]: E0129 16:48:36.505297 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f435c00e-812d-4034-9652-7535d6f694cd" containerName="pull"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.505303 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f435c00e-812d-4034-9652-7535d6f694cd" containerName="pull"
Jan 29 16:48:36 crc kubenswrapper[4813]: E0129 16:48:36.505312 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f435c00e-812d-4034-9652-7535d6f694cd" containerName="extract"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.505318 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f435c00e-812d-4034-9652-7535d6f694cd" containerName="extract"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.505435 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f435c00e-812d-4034-9652-7535d6f694cd" containerName="extract"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.505838 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.510066 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.511399 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-tdsjg"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.520316 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.549941 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85357ae9-ceb5-4325-bdf2-980df74aa349-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-f8xbn\" (UID: \"85357ae9-ceb5-4325-bdf2-980df74aa349\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.550317 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5h6w\" (UniqueName: \"kubernetes.io/projected/85357ae9-ceb5-4325-bdf2-980df74aa349-kube-api-access-z5h6w\") pod \"cert-manager-operator-controller-manager-66c8bdd694-f8xbn\" (UID: \"85357ae9-ceb5-4325-bdf2-980df74aa349\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.583404 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"]
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.651802 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85357ae9-ceb5-4325-bdf2-980df74aa349-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-f8xbn\" (UID: \"85357ae9-ceb5-4325-bdf2-980df74aa349\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.651953 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5h6w\" (UniqueName: \"kubernetes.io/projected/85357ae9-ceb5-4325-bdf2-980df74aa349-kube-api-access-z5h6w\") pod \"cert-manager-operator-controller-manager-66c8bdd694-f8xbn\" (UID: \"85357ae9-ceb5-4325-bdf2-980df74aa349\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.652621 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85357ae9-ceb5-4325-bdf2-980df74aa349-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-f8xbn\" (UID: \"85357ae9-ceb5-4325-bdf2-980df74aa349\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.680526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5h6w\" (UniqueName: \"kubernetes.io/projected/85357ae9-ceb5-4325-bdf2-980df74aa349-kube-api-access-z5h6w\") pod \"cert-manager-operator-controller-manager-66c8bdd694-f8xbn\" (UID: \"85357ae9-ceb5-4325-bdf2-980df74aa349\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"
Jan 29 16:48:36 crc kubenswrapper[4813]: I0129 16:48:36.823012 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"
Jan 29 16:48:37 crc kubenswrapper[4813]: I0129 16:48:37.118450 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn"]
Jan 29 16:48:37 crc kubenswrapper[4813]: W0129 16:48:37.124103 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85357ae9_ceb5_4325_bdf2_980df74aa349.slice/crio-d5c02075a9eeecaa2e6ebe857cd7c87390683a6d216f0a0117e2189dd917e691 WatchSource:0}: Error finding container d5c02075a9eeecaa2e6ebe857cd7c87390683a6d216f0a0117e2189dd917e691: Status 404 returned error can't find the container with id d5c02075a9eeecaa2e6ebe857cd7c87390683a6d216f0a0117e2189dd917e691
Jan 29 16:48:37 crc kubenswrapper[4813]: I0129 16:48:37.758762 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn" event={"ID":"85357ae9-ceb5-4325-bdf2-980df74aa349","Type":"ContainerStarted","Data":"d5c02075a9eeecaa2e6ebe857cd7c87390683a6d216f0a0117e2189dd917e691"}
Jan 29 16:48:37 crc kubenswrapper[4813]: I0129 16:48:37.952386 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xfmps"
Jan 29 16:48:41 crc kubenswrapper[4813]: I0129 16:48:41.781193 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn" event={"ID":"85357ae9-ceb5-4325-bdf2-980df74aa349","Type":"ContainerStarted","Data":"da85f9684e4fe5b36814565d24e4371096214cd1a8a700db3095c2a91db2f72d"}
Jan 29 16:48:41 crc kubenswrapper[4813]: I0129 16:48:41.802808 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-f8xbn" podStartSLOduration=1.683322976 podStartE2EDuration="5.802785635s" podCreationTimestamp="2026-01-29 16:48:36 +0000 UTC" firstStartedPulling="2026-01-29 16:48:37.126701402 +0000 UTC m=+1169.613904618" lastFinishedPulling="2026-01-29 16:48:41.246164061 +0000 UTC m=+1173.733367277" observedRunningTime="2026-01-29 16:48:41.798829681 +0000 UTC m=+1174.286032907" watchObservedRunningTime="2026-01-29 16:48:41.802785635 +0000 UTC m=+1174.289988851"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.456482 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-xcvwg"]
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.457695 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.459646 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.459703 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hrdcr"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.459768 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.474731 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-xcvwg"]
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.595797 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f660d36-6472-483e-96b9-10742c0bdc49-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-xcvwg\" (UID: \"9f660d36-6472-483e-96b9-10742c0bdc49\") " pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.595853 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gspss\" (UniqueName: \"kubernetes.io/projected/9f660d36-6472-483e-96b9-10742c0bdc49-kube-api-access-gspss\") pod \"cert-manager-webhook-6888856db4-xcvwg\" (UID: \"9f660d36-6472-483e-96b9-10742c0bdc49\") " pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.697084 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f660d36-6472-483e-96b9-10742c0bdc49-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-xcvwg\" (UID: \"9f660d36-6472-483e-96b9-10742c0bdc49\") " pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.697146 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gspss\" (UniqueName: \"kubernetes.io/projected/9f660d36-6472-483e-96b9-10742c0bdc49-kube-api-access-gspss\") pod \"cert-manager-webhook-6888856db4-xcvwg\" (UID: \"9f660d36-6472-483e-96b9-10742c0bdc49\") " pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.715751 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f660d36-6472-483e-96b9-10742c0bdc49-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-xcvwg\" (UID: \"9f660d36-6472-483e-96b9-10742c0bdc49\") " pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.716003 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gspss\" (UniqueName: \"kubernetes.io/projected/9f660d36-6472-483e-96b9-10742c0bdc49-kube-api-access-gspss\") pod \"cert-manager-webhook-6888856db4-xcvwg\" (UID: \"9f660d36-6472-483e-96b9-10742c0bdc49\") " pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:45 crc kubenswrapper[4813]: I0129 16:48:45.775167 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:46 crc kubenswrapper[4813]: I0129 16:48:46.054089 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-xcvwg"]
Jan 29 16:48:46 crc kubenswrapper[4813]: I0129 16:48:46.822038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg" event={"ID":"9f660d36-6472-483e-96b9-10742c0bdc49","Type":"ContainerStarted","Data":"a70320fa20b6a292892cbab9fb55b2efac5db147adf5e7d40d82e9da59e0afc2"}
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.107526 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-t84sv"]
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.108328 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.110384 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8c5bp"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.126752 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-t84sv"]
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.238721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e70ce70a-9390-42de-828e-054a8b3e1a4a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-t84sv\" (UID: \"e70ce70a-9390-42de-828e-054a8b3e1a4a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.238817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4444h\" (UniqueName: \"kubernetes.io/projected/e70ce70a-9390-42de-828e-054a8b3e1a4a-kube-api-access-4444h\") pod \"cert-manager-cainjector-5545bd876-t84sv\" (UID: \"e70ce70a-9390-42de-828e-054a8b3e1a4a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.340017 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4444h\" (UniqueName: \"kubernetes.io/projected/e70ce70a-9390-42de-828e-054a8b3e1a4a-kube-api-access-4444h\") pod \"cert-manager-cainjector-5545bd876-t84sv\" (UID: \"e70ce70a-9390-42de-828e-054a8b3e1a4a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.340164 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e70ce70a-9390-42de-828e-054a8b3e1a4a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-t84sv\" (UID: \"e70ce70a-9390-42de-828e-054a8b3e1a4a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.364525 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e70ce70a-9390-42de-828e-054a8b3e1a4a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-t84sv\" (UID: \"e70ce70a-9390-42de-828e-054a8b3e1a4a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.364607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4444h\" (UniqueName: \"kubernetes.io/projected/e70ce70a-9390-42de-828e-054a8b3e1a4a-kube-api-access-4444h\") pod \"cert-manager-cainjector-5545bd876-t84sv\" (UID: \"e70ce70a-9390-42de-828e-054a8b3e1a4a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.440371 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv"
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.650510 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-t84sv"]
Jan 29 16:48:48 crc kubenswrapper[4813]: W0129 16:48:48.665577 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode70ce70a_9390_42de_828e_054a8b3e1a4a.slice/crio-35ab604869393380ff8a679dd277c8afd24f6229a01bdd98c1e7d7d128518719 WatchSource:0}: Error finding container 35ab604869393380ff8a679dd277c8afd24f6229a01bdd98c1e7d7d128518719: Status 404 returned error can't find the container with id 35ab604869393380ff8a679dd277c8afd24f6229a01bdd98c1e7d7d128518719
Jan 29 16:48:48 crc kubenswrapper[4813]: I0129 16:48:48.838354 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv" event={"ID":"e70ce70a-9390-42de-828e-054a8b3e1a4a","Type":"ContainerStarted","Data":"35ab604869393380ff8a679dd277c8afd24f6229a01bdd98c1e7d7d128518719"}
Jan 29 16:48:50 crc kubenswrapper[4813]: I0129 16:48:50.853476 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg" event={"ID":"9f660d36-6472-483e-96b9-10742c0bdc49","Type":"ContainerStarted","Data":"73d4d07cca25c27f063a172f650c71e2c11dc44c531b1cea1fec811d72c7252f"}
Jan 29 16:48:50 crc kubenswrapper[4813]: I0129 16:48:50.853974 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg"
Jan 29 16:48:50 crc kubenswrapper[4813]: I0129 16:48:50.855695 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv" event={"ID":"e70ce70a-9390-42de-828e-054a8b3e1a4a","Type":"ContainerStarted","Data":"965a7d4cef6a207b42856dd4a4419eb1ca5ced31bbc97634c78a5b6d194940cf"}
Jan 29 16:48:50 crc kubenswrapper[4813]: I0129 16:48:50.883247 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg" podStartSLOduration=1.285862586 podStartE2EDuration="5.88322585s" podCreationTimestamp="2026-01-29 16:48:45 +0000 UTC" firstStartedPulling="2026-01-29 16:48:46.058739069 +0000 UTC m=+1178.545942285" lastFinishedPulling="2026-01-29 16:48:50.656102333 +0000 UTC m=+1183.143305549" observedRunningTime="2026-01-29 16:48:50.87281238 +0000 UTC m=+1183.360015596" watchObservedRunningTime="2026-01-29 16:48:50.88322585 +0000 UTC m=+1183.370429076"
Jan 29 16:48:50 crc kubenswrapper[4813]: I0129 16:48:50.897814 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-t84sv" podStartSLOduration=0.924177562 podStartE2EDuration="2.897778499s" podCreationTimestamp="2026-01-29 16:48:48 +0000 UTC" firstStartedPulling="2026-01-29 16:48:48.683453244 +0000 UTC m=+1181.170656460" lastFinishedPulling="2026-01-29 16:48:50.657054181 +0000 UTC m=+1183.144257397" observedRunningTime="2026-01-29 16:48:50.894469134 +0000 UTC m=+1183.381672360" watchObservedRunningTime="2026-01-29 16:48:50.897778499 +0000 UTC m=+1183.384981715"
observedRunningTime="2026-01-29 16:48:50.894469134 +0000 UTC m=+1183.381672360" watchObservedRunningTime="2026-01-29 16:48:50.897778499 +0000 UTC m=+1183.384981715" Jan 29 16:48:55 crc kubenswrapper[4813]: I0129 16:48:55.778701 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-xcvwg" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.405218 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-h8ndz"] Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.406727 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-h8ndz" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.409314 4813 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-djvlg" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.418766 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-h8ndz"] Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.572241 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqj4\" (UniqueName: \"kubernetes.io/projected/220240d2-4982-4884-80eb-09b077e332a1-kube-api-access-lkqj4\") pod \"cert-manager-545d4d4674-h8ndz\" (UID: \"220240d2-4982-4884-80eb-09b077e332a1\") " pod="cert-manager/cert-manager-545d4d4674-h8ndz" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.572290 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/220240d2-4982-4884-80eb-09b077e332a1-bound-sa-token\") pod \"cert-manager-545d4d4674-h8ndz\" (UID: \"220240d2-4982-4884-80eb-09b077e332a1\") " pod="cert-manager/cert-manager-545d4d4674-h8ndz" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.673714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqj4\" (UniqueName: \"kubernetes.io/projected/220240d2-4982-4884-80eb-09b077e332a1-kube-api-access-lkqj4\") pod \"cert-manager-545d4d4674-h8ndz\" (UID: \"220240d2-4982-4884-80eb-09b077e332a1\") " pod="cert-manager/cert-manager-545d4d4674-h8ndz" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.673797 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/220240d2-4982-4884-80eb-09b077e332a1-bound-sa-token\") pod \"cert-manager-545d4d4674-h8ndz\" (UID: \"220240d2-4982-4884-80eb-09b077e332a1\") " pod="cert-manager/cert-manager-545d4d4674-h8ndz" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.700248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/220240d2-4982-4884-80eb-09b077e332a1-bound-sa-token\") pod \"cert-manager-545d4d4674-h8ndz\" (UID: \"220240d2-4982-4884-80eb-09b077e332a1\") " pod="cert-manager/cert-manager-545d4d4674-h8ndz" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.706811 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqj4\" (UniqueName: \"kubernetes.io/projected/220240d2-4982-4884-80eb-09b077e332a1-kube-api-access-lkqj4\") pod \"cert-manager-545d4d4674-h8ndz\" (UID: \"220240d2-4982-4884-80eb-09b077e332a1\") " pod="cert-manager/cert-manager-545d4d4674-h8ndz" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.735876 4813 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-h8ndz" Jan 29 16:49:03 crc kubenswrapper[4813]: I0129 16:49:03.961761 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-h8ndz"] Jan 29 16:49:04 crc kubenswrapper[4813]: I0129 16:49:04.950149 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-h8ndz" event={"ID":"220240d2-4982-4884-80eb-09b077e332a1","Type":"ContainerStarted","Data":"f184d025a4e1906806b96ee93fd5b719824f5d60bcbcf8baa54ffe70d539920a"} Jan 29 16:49:04 crc kubenswrapper[4813]: I0129 16:49:04.950663 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-h8ndz" event={"ID":"220240d2-4982-4884-80eb-09b077e332a1","Type":"ContainerStarted","Data":"87234af223ce209c9812f0374c16350cc154d8bc805ce607d2025ae0f87bc5ba"} Jan 29 16:49:04 crc kubenswrapper[4813]: I0129 16:49:04.973129 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-h8ndz" podStartSLOduration=1.9730833140000001 podStartE2EDuration="1.973083314s" podCreationTimestamp="2026-01-29 16:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:49:04.96809128 +0000 UTC m=+1197.455294506" watchObservedRunningTime="2026-01-29 16:49:04.973083314 +0000 UTC m=+1197.460286540" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.394422 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kgprb"] Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.395977 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kgprb" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.405287 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.405295 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8lp6b" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.405411 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.420162 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kgprb"] Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.468250 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kssv\" (UniqueName: \"kubernetes.io/projected/71669bd9-c5b9-420f-9a2a-e21723cc50f2-kube-api-access-5kssv\") pod \"openstack-operator-index-kgprb\" (UID: \"71669bd9-c5b9-420f-9a2a-e21723cc50f2\") " pod="openstack-operators/openstack-operator-index-kgprb" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.569757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kssv\" (UniqueName: \"kubernetes.io/projected/71669bd9-c5b9-420f-9a2a-e21723cc50f2-kube-api-access-5kssv\") pod \"openstack-operator-index-kgprb\" (UID: \"71669bd9-c5b9-420f-9a2a-e21723cc50f2\") " pod="openstack-operators/openstack-operator-index-kgprb" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.588254 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kssv\" (UniqueName: \"kubernetes.io/projected/71669bd9-c5b9-420f-9a2a-e21723cc50f2-kube-api-access-5kssv\") pod \"openstack-operator-index-kgprb\" (UID: \"71669bd9-c5b9-420f-9a2a-e21723cc50f2\") " pod="openstack-operators/openstack-operator-index-kgprb" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.715486 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kgprb" Jan 29 16:49:09 crc kubenswrapper[4813]: I0129 16:49:09.902383 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kgprb"] Jan 29 16:49:10 crc kubenswrapper[4813]: I0129 16:49:10.002027 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kgprb" event={"ID":"71669bd9-c5b9-420f-9a2a-e21723cc50f2","Type":"ContainerStarted","Data":"047740fe4c9e7a84a76145bebeca800d9fc22229ecd76d60d4e1a0dd281d9354"} Jan 29 16:49:11 crc kubenswrapper[4813]: I0129 16:49:11.010233 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kgprb" event={"ID":"71669bd9-c5b9-420f-9a2a-e21723cc50f2","Type":"ContainerStarted","Data":"ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f"} Jan 29 16:49:11 crc kubenswrapper[4813]: I0129 16:49:11.035750 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kgprb" podStartSLOduration=1.221742111 podStartE2EDuration="2.035720304s" podCreationTimestamp="2026-01-29 16:49:09 +0000 UTC" firstStartedPulling="2026-01-29 16:49:09.911610092 +0000 UTC m=+1202.398813308" lastFinishedPulling="2026-01-29 16:49:10.725588295 +0000 UTC m=+1203.212791501" observedRunningTime="2026-01-29 16:49:11.031742249 +0000 UTC m=+1203.518945465" watchObservedRunningTime="2026-01-29 16:49:11.035720304 +0000 UTC m=+1203.522923530" Jan 29 16:49:11 crc kubenswrapper[4813]: I0129 16:49:11.961307 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kgprb"] Jan 29 16:49:12 crc kubenswrapper[4813]: I0129 16:49:12.378637 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-b9hzj"] Jan 29 16:49:12 crc kubenswrapper[4813]: I0129 16:49:12.379861 4813 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:49:12 crc kubenswrapper[4813]: I0129 16:49:12.392761 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b9hzj"]
Jan 29 16:49:12 crc kubenswrapper[4813]: I0129 16:49:12.414827 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7b7\" (UniqueName: \"kubernetes.io/projected/ee67a921-286b-45f7-b579-0c7463587f13-kube-api-access-lz7b7\") pod \"openstack-operator-index-b9hzj\" (UID: \"ee67a921-286b-45f7-b579-0c7463587f13\") " pod="openstack-operators/openstack-operator-index-b9hzj"
Jan 29 16:49:12 crc kubenswrapper[4813]: I0129 16:49:12.516948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz7b7\" (UniqueName: \"kubernetes.io/projected/ee67a921-286b-45f7-b579-0c7463587f13-kube-api-access-lz7b7\") pod \"openstack-operator-index-b9hzj\" (UID: \"ee67a921-286b-45f7-b579-0c7463587f13\") " pod="openstack-operators/openstack-operator-index-b9hzj"
Jan 29 16:49:12 crc kubenswrapper[4813]: I0129 16:49:12.538084 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz7b7\" (UniqueName: \"kubernetes.io/projected/ee67a921-286b-45f7-b579-0c7463587f13-kube-api-access-lz7b7\") pod \"openstack-operator-index-b9hzj\" (UID: \"ee67a921-286b-45f7-b579-0c7463587f13\") " pod="openstack-operators/openstack-operator-index-b9hzj"
Jan 29 16:49:12 crc kubenswrapper[4813]: I0129 16:49:12.719565 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-b9hzj"
Jan 29 16:49:12 crc kubenswrapper[4813]: I0129 16:49:12.987217 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-b9hzj"]
Jan 29 16:49:12 crc kubenswrapper[4813]: W0129 16:49:12.993009 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee67a921_286b_45f7_b579_0c7463587f13.slice/crio-652502b4babdc5eb03cb9e6b4d05126343f81639182125978d0c866bdf8bc504 WatchSource:0}: Error finding container 652502b4babdc5eb03cb9e6b4d05126343f81639182125978d0c866bdf8bc504: Status 404 returned error can't find the container with id 652502b4babdc5eb03cb9e6b4d05126343f81639182125978d0c866bdf8bc504
Jan 29 16:49:13 crc kubenswrapper[4813]: I0129 16:49:13.021137 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b9hzj" event={"ID":"ee67a921-286b-45f7-b579-0c7463587f13","Type":"ContainerStarted","Data":"652502b4babdc5eb03cb9e6b4d05126343f81639182125978d0c866bdf8bc504"}
Jan 29 16:49:13 crc kubenswrapper[4813]: I0129 16:49:13.021303 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kgprb" podUID="71669bd9-c5b9-420f-9a2a-e21723cc50f2" containerName="registry-server" containerID="cri-o://ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f" gracePeriod=2
Jan 29 16:49:13 crc kubenswrapper[4813]: I0129 16:49:13.336356 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kgprb"
Jan 29 16:49:13 crc kubenswrapper[4813]: I0129 16:49:13.429609 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kssv\" (UniqueName: \"kubernetes.io/projected/71669bd9-c5b9-420f-9a2a-e21723cc50f2-kube-api-access-5kssv\") pod \"71669bd9-c5b9-420f-9a2a-e21723cc50f2\" (UID: \"71669bd9-c5b9-420f-9a2a-e21723cc50f2\") "
Jan 29 16:49:13 crc kubenswrapper[4813]: I0129 16:49:13.435323 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71669bd9-c5b9-420f-9a2a-e21723cc50f2-kube-api-access-5kssv" (OuterVolumeSpecName: "kube-api-access-5kssv") pod "71669bd9-c5b9-420f-9a2a-e21723cc50f2" (UID: "71669bd9-c5b9-420f-9a2a-e21723cc50f2"). InnerVolumeSpecName "kube-api-access-5kssv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:49:13 crc kubenswrapper[4813]: I0129 16:49:13.531502 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kssv\" (UniqueName: \"kubernetes.io/projected/71669bd9-c5b9-420f-9a2a-e21723cc50f2-kube-api-access-5kssv\") on node \"crc\" DevicePath \"\""
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.031806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-b9hzj" event={"ID":"ee67a921-286b-45f7-b579-0c7463587f13","Type":"ContainerStarted","Data":"f6093158ba04363d9cb9fc82246a72211b713a32a69310b9555f361c39aea45d"}
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.034729 4813 generic.go:334] "Generic (PLEG): container finished" podID="71669bd9-c5b9-420f-9a2a-e21723cc50f2" containerID="ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f" exitCode=0
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.034901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kgprb" event={"ID":"71669bd9-c5b9-420f-9a2a-e21723cc50f2","Type":"ContainerDied","Data":"ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f"}
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.035004 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kgprb" event={"ID":"71669bd9-c5b9-420f-9a2a-e21723cc50f2","Type":"ContainerDied","Data":"047740fe4c9e7a84a76145bebeca800d9fc22229ecd76d60d4e1a0dd281d9354"}
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.035100 4813 scope.go:117] "RemoveContainer" containerID="ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f"
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.035396 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kgprb"
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.068629 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-b9hzj" podStartSLOduration=1.6421313899999999 podStartE2EDuration="2.068600983s" podCreationTimestamp="2026-01-29 16:49:12 +0000 UTC" firstStartedPulling="2026-01-29 16:49:12.997425507 +0000 UTC m=+1205.484628733" lastFinishedPulling="2026-01-29 16:49:13.42389511 +0000 UTC m=+1205.911098326" observedRunningTime="2026-01-29 16:49:14.056784082 +0000 UTC m=+1206.543987318" watchObservedRunningTime="2026-01-29 16:49:14.068600983 +0000 UTC m=+1206.555804189"
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.073151 4813 scope.go:117] "RemoveContainer" containerID="ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f"
Jan 29 16:49:14 crc kubenswrapper[4813]: E0129 16:49:14.073944 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f\": container with ID starting with ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f not found: ID does not exist" containerID="ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f"
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.074073 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f"} err="failed to get container status \"ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f\": rpc error: code = NotFound desc = could not find container \"ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f\": container with ID starting with ff877da1c8a19eb43e053e3a8d5dac39949bbe1faa515765e6b3afdc3fa7eb9f not found: ID does not exist"
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.081805 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kgprb"]
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.088584 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kgprb"]
Jan 29 16:49:14 crc kubenswrapper[4813]: I0129 16:49:14.248422 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71669bd9-c5b9-420f-9a2a-e21723cc50f2" path="/var/lib/kubelet/pods/71669bd9-c5b9-420f-9a2a-e21723cc50f2/volumes"
Jan 29 16:49:22 crc kubenswrapper[4813]: I0129 16:49:22.720399 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-b9hzj"
Jan 29 16:49:22 crc kubenswrapper[4813]: I0129 16:49:22.721024 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-b9hzj"
Jan 29 16:49:22 crc kubenswrapper[4813]: I0129 16:49:22.751038 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-b9hzj"
Jan 29 16:49:23 crc kubenswrapper[4813]: I0129 16:49:23.139905 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-b9hzj"
Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.709064 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5"]
Jan 29
16:49:36 crc kubenswrapper[4813]: E0129 16:49:36.709977 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71669bd9-c5b9-420f-9a2a-e21723cc50f2" containerName="registry-server" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.710003 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="71669bd9-c5b9-420f-9a2a-e21723cc50f2" containerName="registry-server" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.710261 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="71669bd9-c5b9-420f-9a2a-e21723cc50f2" containerName="registry-server" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.713505 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.718785 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xnnrt" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.719041 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5"] Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.763194 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.763274 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bkl5\" (UniqueName: \"kubernetes.io/projected/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-kube-api-access-4bkl5\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.763444 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.864492 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.864583 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" 
Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.864642 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bkl5\" (UniqueName: \"kubernetes.io/projected/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-kube-api-access-4bkl5\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.865245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.865322 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:36 crc kubenswrapper[4813]: I0129 16:49:36.886722 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bkl5\" (UniqueName: \"kubernetes.io/projected/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-kube-api-access-4bkl5\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:37 crc kubenswrapper[4813]: I0129 16:49:37.036080 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:37 crc kubenswrapper[4813]: I0129 16:49:37.295264 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5"] Jan 29 16:49:38 crc kubenswrapper[4813]: I0129 16:49:38.210381 4813 generic.go:334] "Generic (PLEG): container finished" podID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerID="9c37630ce18122488d2172f6b1f90a794dcfd50207f0ce670b7f1c7fbfb36dcd" exitCode=0 Jan 29 16:49:38 crc kubenswrapper[4813]: I0129 16:49:38.210497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" event={"ID":"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106","Type":"ContainerDied","Data":"9c37630ce18122488d2172f6b1f90a794dcfd50207f0ce670b7f1c7fbfb36dcd"} Jan 29 16:49:38 crc kubenswrapper[4813]: I0129 16:49:38.210716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" event={"ID":"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106","Type":"ContainerStarted","Data":"84a18da41be78f66b8d409af27c4f3ddc34b6b06dcf2fcfaba59cffc6f630f58"} Jan 29 16:49:40 crc kubenswrapper[4813]: I0129 16:49:40.230725 4813 generic.go:334] "Generic (PLEG): container finished" podID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerID="7ce3e8adb6655de8dfb8159fc6abea71c766e94db50160c9578f09088fdeed6e" exitCode=0 Jan 29 16:49:40 crc kubenswrapper[4813]: I0129 16:49:40.231059 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" event={"ID":"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106","Type":"ContainerDied","Data":"7ce3e8adb6655de8dfb8159fc6abea71c766e94db50160c9578f09088fdeed6e"} Jan 29 16:49:41 crc kubenswrapper[4813]: I0129 16:49:41.242919 4813 generic.go:334] "Generic (PLEG): container finished" podID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerID="8cf78e38fc20f1c3f5b24e7f131a487f1cebeb8b6a78ff02a56796f80b9c5444" exitCode=0 Jan 29 16:49:41 crc kubenswrapper[4813]: I0129 16:49:41.242984 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" event={"ID":"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106","Type":"ContainerDied","Data":"8cf78e38fc20f1c3f5b24e7f131a487f1cebeb8b6a78ff02a56796f80b9c5444"} Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.518299 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.685055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bkl5\" (UniqueName: \"kubernetes.io/projected/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-kube-api-access-4bkl5\") pod \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.685140 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-bundle\") pod \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.685169 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-util\") pod \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\" (UID: \"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106\") " Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.686218 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-bundle" (OuterVolumeSpecName: "bundle") pod "25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" (UID: "25bf8378-ee1c-4a1e-8a87-2c3efd6ae106"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.695377 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-kube-api-access-4bkl5" (OuterVolumeSpecName: "kube-api-access-4bkl5") pod "25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" (UID: "25bf8378-ee1c-4a1e-8a87-2c3efd6ae106"). InnerVolumeSpecName "kube-api-access-4bkl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.699435 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-util" (OuterVolumeSpecName: "util") pod "25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" (UID: "25bf8378-ee1c-4a1e-8a87-2c3efd6ae106"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.786362 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bkl5\" (UniqueName: \"kubernetes.io/projected/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-kube-api-access-4bkl5\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.786410 4813 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:42 crc kubenswrapper[4813]: I0129 16:49:42.786425 4813 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/25bf8378-ee1c-4a1e-8a87-2c3efd6ae106-util\") on node \"crc\" DevicePath \"\"" Jan 29 16:49:43 crc kubenswrapper[4813]: I0129 16:49:43.261473 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" event={"ID":"25bf8378-ee1c-4a1e-8a87-2c3efd6ae106","Type":"ContainerDied","Data":"84a18da41be78f66b8d409af27c4f3ddc34b6b06dcf2fcfaba59cffc6f630f58"} Jan 29 16:49:43 crc kubenswrapper[4813]: I0129 16:49:43.261514 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a18da41be78f66b8d409af27c4f3ddc34b6b06dcf2fcfaba59cffc6f630f58" Jan 29 16:49:43 crc kubenswrapper[4813]: I0129 16:49:43.261585 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5" Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.899069 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs"] Jan 29 16:49:48 crc kubenswrapper[4813]: E0129 16:49:48.899743 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerName="pull" Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.899762 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerName="pull" Jan 29 16:49:48 crc kubenswrapper[4813]: E0129 16:49:48.899779 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerName="util" Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.899788 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerName="util" Jan 29 16:49:48 crc kubenswrapper[4813]: E0129 16:49:48.899807 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerName="extract" Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.899816 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerName="extract" Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.899968 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bf8378-ee1c-4a1e-8a87-2c3efd6ae106" containerName="extract" Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.900599 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.906628 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mswbg" Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.940524 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs"] Jan 29 16:49:48 crc kubenswrapper[4813]: I0129 16:49:48.967698 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsglm\" (UniqueName: \"kubernetes.io/projected/8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58-kube-api-access-zsglm\") pod \"openstack-operator-controller-init-757f46c65d-r6jhs\" (UID: \"8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" Jan 29 16:49:49 crc kubenswrapper[4813]: I0129 16:49:49.068883 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsglm\" (UniqueName: \"kubernetes.io/projected/8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58-kube-api-access-zsglm\") pod \"openstack-operator-controller-init-757f46c65d-r6jhs\" (UID: \"8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" Jan 29 16:49:49 crc kubenswrapper[4813]: I0129 16:49:49.121187 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsglm\" (UniqueName: \"kubernetes.io/projected/8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58-kube-api-access-zsglm\") pod \"openstack-operator-controller-init-757f46c65d-r6jhs\" (UID: \"8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" Jan 29 16:49:49 crc kubenswrapper[4813]: I0129 16:49:49.226283 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" Jan 29 16:49:49 crc kubenswrapper[4813]: I0129 16:49:49.462499 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs"] Jan 29 16:49:50 crc kubenswrapper[4813]: I0129 16:49:50.309880 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" event={"ID":"8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58","Type":"ContainerStarted","Data":"027367d105fbff483a8520c5738c97c887e0ee85cb950dfeedef9286e08459f0"} Jan 29 16:49:55 crc kubenswrapper[4813]: I0129 16:49:55.356545 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" event={"ID":"8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58","Type":"ContainerStarted","Data":"ef00d7e621b2159448f57f7a471fe14066860a77bc2a10e7821887013f5de351"} Jan 29 16:49:55 crc kubenswrapper[4813]: I0129 16:49:55.357461 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" Jan 29 16:49:55 crc kubenswrapper[4813]: I0129 16:49:55.395006 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" podStartSLOduration=2.451009549 podStartE2EDuration="7.394973639s" podCreationTimestamp="2026-01-29 16:49:48 +0000 UTC" firstStartedPulling="2026-01-29 16:49:49.482471253 +0000 UTC m=+1241.969674469" lastFinishedPulling="2026-01-29 16:49:54.426435343 +0000 UTC m=+1246.913638559" observedRunningTime="2026-01-29 16:49:55.387192874 +0000 UTC m=+1247.874396110" watchObservedRunningTime="2026-01-29 16:49:55.394973639 +0000 UTC m=+1247.882176855" Jan 29 16:49:59 crc kubenswrapper[4813]: I0129 16:49:59.230196 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-r6jhs" Jan 29 16:50:00 crc kubenswrapper[4813]: I0129 16:50:00.241761 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:50:00 crc kubenswrapper[4813]: I0129 16:50:00.241806 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:50:08 crc kubenswrapper[4813]: I0129 16:50:08.569535 4813 scope.go:117] "RemoveContainer" containerID="c19120fdbaab156ddcdf1de3ff3f3434983c99cf541e909e4fc87fa49892e334" Jan 29 16:50:30 crc kubenswrapper[4813]: I0129 16:50:30.239921 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:50:30 crc kubenswrapper[4813]: I0129 16:50:30.240548 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" 
podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:50:30 crc kubenswrapper[4813]: I0129 16:50:30.992208 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4"] Jan 29 16:50:30 crc kubenswrapper[4813]: I0129 16:50:30.993069 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" Jan 29 16:50:30 crc kubenswrapper[4813]: I0129 16:50:30.995786 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-pdvzt" Jan 29 16:50:30 crc kubenswrapper[4813]: I0129 16:50:30.998057 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd"] Jan 29 16:50:30 crc kubenswrapper[4813]: I0129 16:50:30.999070 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.002828 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zx4sw" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.005344 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.010242 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.011320 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.015001 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pz8tr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.015501 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.020481 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.045327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqck\" (UniqueName: \"kubernetes.io/projected/73e44ad8-56ed-43ed-8eed-d466ad56e480-kube-api-access-vkqck\") pod \"designate-operator-controller-manager-6d9697b7f4-fq8n9\" (UID: \"73e44ad8-56ed-43ed-8eed-d466ad56e480\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.045389 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvxl\" (UniqueName: \"kubernetes.io/projected/a5517fc3-4c76-4ecc-bfca-240cbb5877af-kube-api-access-dsvxl\") pod \"cinder-operator-controller-manager-8d874c8fc-g5vs4\" (UID: \"a5517fc3-4c76-4ecc-bfca-240cbb5877af\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.045454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qdw\" (UniqueName: \"kubernetes.io/projected/d7c7a81c-3f15-493f-b7cd-97486af5c4a8-kube-api-access-s5qdw\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-99psd\" (UID: \"d7c7a81c-3f15-493f-b7cd-97486af5c4a8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.048141 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.049138 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.066625 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6ttnm" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.072741 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.074164 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.087510 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pxfr2" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.105381 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.119158 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.128716 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wmz96"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.129670 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.133452 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-x6crk" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.133460 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.146908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsvxl\" (UniqueName: \"kubernetes.io/projected/a5517fc3-4c76-4ecc-bfca-240cbb5877af-kube-api-access-dsvxl\") pod \"cinder-operator-controller-manager-8d874c8fc-g5vs4\" (UID: \"a5517fc3-4c76-4ecc-bfca-240cbb5877af\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.146974 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v8kq\" (UniqueName: \"kubernetes.io/projected/b7614801-3c1c-43c9-b270-8121ef10bb6f-kube-api-access-8v8kq\") pod \"glance-operator-controller-manager-8886f4c47-txhlr\" (UID: \"b7614801-3c1c-43c9-b270-8121ef10bb6f\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.147000 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qdw\" (UniqueName: \"kubernetes.io/projected/d7c7a81c-3f15-493f-b7cd-97486af5c4a8-kube-api-access-s5qdw\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-99psd\" (UID: \"d7c7a81c-3f15-493f-b7cd-97486af5c4a8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.147040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwjj\" (UniqueName: \"kubernetes.io/projected/239ca114-1b8f-447f-968e-83c058fb678e-kube-api-access-xrwjj\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.147068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4zglr\" (UniqueName: \"kubernetes.io/projected/e5d8b964-7415-4684-85a2-2eb6ba75acb6-kube-api-access-4zglr\") pod \"heat-operator-controller-manager-69d6db494d-qrrsx\" (UID: \"e5d8b964-7415-4684-85a2-2eb6ba75acb6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.147086 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.147155 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqck\" (UniqueName: \"kubernetes.io/projected/73e44ad8-56ed-43ed-8eed-d466ad56e480-kube-api-access-vkqck\") pod \"designate-operator-controller-manager-6d9697b7f4-fq8n9\" (UID: \"73e44ad8-56ed-43ed-8eed-d466ad56e480\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.158519 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.159586 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.161500 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-j8nr9" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.170274 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.171407 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.173836 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lnnx2" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.174760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqck\" (UniqueName: \"kubernetes.io/projected/73e44ad8-56ed-43ed-8eed-d466ad56e480-kube-api-access-vkqck\") pod \"designate-operator-controller-manager-6d9697b7f4-fq8n9\" (UID: \"73e44ad8-56ed-43ed-8eed-d466ad56e480\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.179413 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wmz96"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.188831 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.189860 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qdw\" (UniqueName: \"kubernetes.io/projected/d7c7a81c-3f15-493f-b7cd-97486af5c4a8-kube-api-access-s5qdw\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-99psd\" (UID: \"d7c7a81c-3f15-493f-b7cd-97486af5c4a8\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.201244 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvxl\" (UniqueName: \"kubernetes.io/projected/a5517fc3-4c76-4ecc-bfca-240cbb5877af-kube-api-access-dsvxl\") pod \"cinder-operator-controller-manager-8d874c8fc-g5vs4\" (UID: \"a5517fc3-4c76-4ecc-bfca-240cbb5877af\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.208271 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.246617 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.255368 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v8kq\" (UniqueName: \"kubernetes.io/projected/b7614801-3c1c-43c9-b270-8121ef10bb6f-kube-api-access-8v8kq\") pod \"glance-operator-controller-manager-8886f4c47-txhlr\" (UID: \"b7614801-3c1c-43c9-b270-8121ef10bb6f\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.255467 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrwjj\" (UniqueName: \"kubernetes.io/projected/239ca114-1b8f-447f-968e-83c058fb678e-kube-api-access-xrwjj\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.255503 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.255527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zglr\" (UniqueName: \"kubernetes.io/projected/e5d8b964-7415-4684-85a2-2eb6ba75acb6-kube-api-access-4zglr\") pod \"heat-operator-controller-manager-69d6db494d-qrrsx\" (UID: \"e5d8b964-7415-4684-85a2-2eb6ba75acb6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.255567 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c5kx\" (UniqueName: \"kubernetes.io/projected/74c64061-8f84-45f4-813c-028126b44630-kube-api-access-9c5kx\") pod \"horizon-operator-controller-manager-5fb775575f-wmqxl\" (UID: \"74c64061-8f84-45f4-813c-028126b44630\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.255634 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tlvf\" (UniqueName: \"kubernetes.io/projected/8fb27cac-ef23-4b14-bee3-7b69233d9cdc-kube-api-access-8tlvf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-gttcx\" (UID: \"8fb27cac-ef23-4b14-bee3-7b69233d9cdc\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" Jan 29 16:50:31 crc kubenswrapper[4813]: E0129 16:50:31.256643 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:31 crc kubenswrapper[4813]: E0129 16:50:31.256733 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert podName:239ca114-1b8f-447f-968e-83c058fb678e nodeName:}" failed. No retries permitted until 2026-01-29 16:50:31.756714251 +0000 UTC m=+1284.243917467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert") pod "infra-operator-controller-manager-79955696d6-wmz96" (UID: "239ca114-1b8f-447f-968e-83c058fb678e") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.257297 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.266905 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-z5dg9" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.272823 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.291675 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.298509 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zglr\" (UniqueName: \"kubernetes.io/projected/e5d8b964-7415-4684-85a2-2eb6ba75acb6-kube-api-access-4zglr\") pod \"heat-operator-controller-manager-69d6db494d-qrrsx\" (UID: \"e5d8b964-7415-4684-85a2-2eb6ba75acb6\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.302151 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-k2hrw" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.308281 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.308765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwjj\" (UniqueName: \"kubernetes.io/projected/239ca114-1b8f-447f-968e-83c058fb678e-kube-api-access-xrwjj\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.313727 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v8kq\" (UniqueName: \"kubernetes.io/projected/b7614801-3c1c-43c9-b270-8121ef10bb6f-kube-api-access-8v8kq\") pod \"glance-operator-controller-manager-8886f4c47-txhlr\" (UID: \"b7614801-3c1c-43c9-b270-8121ef10bb6f\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.329190 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.335813 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.352957 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.357052 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wt2h\" (UniqueName: \"kubernetes.io/projected/f6c9bd68-507b-4ecc-a1cd-e88ab1e96727-kube-api-access-2wt2h\") pod \"keystone-operator-controller-manager-84f48565d4-gv495\" (UID: \"f6c9bd68-507b-4ecc-a1cd-e88ab1e96727\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.357153 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tlvf\" (UniqueName: \"kubernetes.io/projected/8fb27cac-ef23-4b14-bee3-7b69233d9cdc-kube-api-access-8tlvf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-gttcx\" (UID: \"8fb27cac-ef23-4b14-bee3-7b69233d9cdc\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.357186 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpx5\" (UniqueName: \"kubernetes.io/projected/b24c6187-bc48-4261-b142-2269f470e58a-kube-api-access-4gpx5\") pod \"manila-operator-controller-manager-7dd968899f-94b2b\" (UID: \"b24c6187-bc48-4261-b142-2269f470e58a\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.357263 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c5kx\" (UniqueName: \"kubernetes.io/projected/74c64061-8f84-45f4-813c-028126b44630-kube-api-access-9c5kx\") pod \"horizon-operator-controller-manager-5fb775575f-wmqxl\" (UID: \"74c64061-8f84-45f4-813c-028126b44630\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.390641 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c5kx\" (UniqueName: \"kubernetes.io/projected/74c64061-8f84-45f4-813c-028126b44630-kube-api-access-9c5kx\") pod \"horizon-operator-controller-manager-5fb775575f-wmqxl\" (UID: \"74c64061-8f84-45f4-813c-028126b44630\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.392077 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.392492 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tlvf\" (UniqueName: \"kubernetes.io/projected/8fb27cac-ef23-4b14-bee3-7b69233d9cdc-kube-api-access-8tlvf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-gttcx\" (UID: \"8fb27cac-ef23-4b14-bee3-7b69233d9cdc\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.404675 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.408912 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.409873 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.412225 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-c9ns6" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.414323 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.415326 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.420504 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.421396 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.425505 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-86k5s" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.430860 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.441314 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.442342 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.445428 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-k5vz4" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.454069 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.454913 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.457728 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bthsg" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.459965 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wt2h\" (UniqueName: \"kubernetes.io/projected/f6c9bd68-507b-4ecc-a1cd-e88ab1e96727-kube-api-access-2wt2h\") pod \"keystone-operator-controller-manager-84f48565d4-gv495\" (UID: \"f6c9bd68-507b-4ecc-a1cd-e88ab1e96727\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.460037 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfgl\" (UniqueName: \"kubernetes.io/projected/79ec4543-cad4-4120-addd-0aeb8756eaaf-kube-api-access-xsfgl\") pod \"octavia-operator-controller-manager-6687f8d877-dk624\" (UID: \"79ec4543-cad4-4120-addd-0aeb8756eaaf\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.460074 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4xn6\" (UniqueName: \"kubernetes.io/projected/53f25c98-8891-4e13-afaa-4cbae4bf1c7e-kube-api-access-p4xn6\") pod \"neutron-operator-controller-manager-585dbc889-jssxt\" (UID: \"53f25c98-8891-4e13-afaa-4cbae4bf1c7e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.460093 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gpx5\" (UniqueName: \"kubernetes.io/projected/b24c6187-bc48-4261-b142-2269f470e58a-kube-api-access-4gpx5\") pod \"manila-operator-controller-manager-7dd968899f-94b2b\" (UID: \"b24c6187-bc48-4261-b142-2269f470e58a\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.472692 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.485156 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.492904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wt2h\" (UniqueName: \"kubernetes.io/projected/f6c9bd68-507b-4ecc-a1cd-e88ab1e96727-kube-api-access-2wt2h\") pod \"keystone-operator-controller-manager-84f48565d4-gv495\" (UID: \"f6c9bd68-507b-4ecc-a1cd-e88ab1e96727\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.507622 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.508568 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.513875 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.515156 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-rdbzz" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.524157 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.525312 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.533507 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qksq2" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.533681 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.535379 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gpx5\" (UniqueName: \"kubernetes.io/projected/b24c6187-bc48-4261-b142-2269f470e58a-kube-api-access-4gpx5\") pod \"manila-operator-controller-manager-7dd968899f-94b2b\" (UID: \"b24c6187-bc48-4261-b142-2269f470e58a\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.559912 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.561065 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.563571 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ztbtb" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.567534 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxjwj\" (UniqueName: \"kubernetes.io/projected/30cb06d1-8e0a-4702-bd9c-42561afd684c-kube-api-access-gxjwj\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.567590 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84pt\" (UniqueName: \"kubernetes.io/projected/6ca5d465-220c-4c51-a5be-c304ddec9e48-kube-api-access-d84pt\") pod \"placement-operator-controller-manager-5b964cf4cd-8k6nd\" (UID: \"6ca5d465-220c-4c51-a5be-c304ddec9e48\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.567631 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsfgl\" (UniqueName: \"kubernetes.io/projected/79ec4543-cad4-4120-addd-0aeb8756eaaf-kube-api-access-xsfgl\") pod \"octavia-operator-controller-manager-6687f8d877-dk624\" (UID: \"79ec4543-cad4-4120-addd-0aeb8756eaaf\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.567651 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zh9\" (UniqueName: \"kubernetes.io/projected/4fa45b77-7f33-448e-9855-5f9f5117bf82-kube-api-access-57zh9\") pod \"mariadb-operator-controller-manager-67bf948998-h6hbr\" (UID: \"4fa45b77-7f33-448e-9855-5f9f5117bf82\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.567670 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt6m8\" (UniqueName: \"kubernetes.io/projected/d81220d4-722d-4fc5-9626-826d1eccc841-kube-api-access-xt6m8\") pod \"ovn-operator-controller-manager-788c46999f-mhdmm\" (UID: \"d81220d4-722d-4fc5-9626-826d1eccc841\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.567692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4xn6\" (UniqueName: \"kubernetes.io/projected/53f25c98-8891-4e13-afaa-4cbae4bf1c7e-kube-api-access-p4xn6\") pod \"neutron-operator-controller-manager-585dbc889-jssxt\" (UID: \"53f25c98-8891-4e13-afaa-4cbae4bf1c7e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.567712 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.567733 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2ct\" (UniqueName: \"kubernetes.io/projected/b67ed668-3ff7-416c-8014-1a5f9668b54c-kube-api-access-lx2ct\") pod \"nova-operator-controller-manager-55bff696bd-5mtzp\" (UID: \"b67ed668-3ff7-416c-8014-1a5f9668b54c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.582929 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.583870 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.588246 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rrdd2" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.588903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4xn6\" (UniqueName: \"kubernetes.io/projected/53f25c98-8891-4e13-afaa-4cbae4bf1c7e-kube-api-access-p4xn6\") pod \"neutron-operator-controller-manager-585dbc889-jssxt\" (UID: \"53f25c98-8891-4e13-afaa-4cbae4bf1c7e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.589883 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsfgl\" (UniqueName: \"kubernetes.io/projected/79ec4543-cad4-4120-addd-0aeb8756eaaf-kube-api-access-xsfgl\") pod \"octavia-operator-controller-manager-6687f8d877-dk624\" (UID: \"79ec4543-cad4-4120-addd-0aeb8756eaaf\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.590433 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.598349 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.598644 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.608487 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.615866 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.616777 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.620297 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.623149 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-s7xq2" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.624005 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.655723 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.657002 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.663493 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.664252 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-djxtn" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669499 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2ct\" (UniqueName: \"kubernetes.io/projected/b67ed668-3ff7-416c-8014-1a5f9668b54c-kube-api-access-lx2ct\") pod \"nova-operator-controller-manager-55bff696bd-5mtzp\" (UID: \"b67ed668-3ff7-416c-8014-1a5f9668b54c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669540 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsm9\" (UniqueName: \"kubernetes.io/projected/123e5d6b-93c9-452c-9c77-81333f65487d-kube-api-access-nmsm9\") pod \"telemetry-operator-controller-manager-64b5b76f97-zhpjf\" (UID: \"123e5d6b-93c9-452c-9c77-81333f65487d\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669634 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwww5\" (UniqueName: \"kubernetes.io/projected/c709a691-8375-4f1f-8552-12e96c2da0a8-kube-api-access-dwww5\") pod \"test-operator-controller-manager-56f8bfcd9f-p9kz6\" (UID: \"c709a691-8375-4f1f-8552-12e96c2da0a8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669669 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlp84\" (UniqueName: \"kubernetes.io/projected/c5267884-745c-47aa-a389-87da2899b706-kube-api-access-wlp84\") pod \"swift-operator-controller-manager-68fc8c869-rhsqf\" (UID: \"c5267884-745c-47aa-a389-87da2899b706\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxjwj\" (UniqueName: \"kubernetes.io/projected/30cb06d1-8e0a-4702-bd9c-42561afd684c-kube-api-access-gxjwj\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84pt\" (UniqueName: \"kubernetes.io/projected/6ca5d465-220c-4c51-a5be-c304ddec9e48-kube-api-access-d84pt\") pod \"placement-operator-controller-manager-5b964cf4cd-8k6nd\" (UID: \"6ca5d465-220c-4c51-a5be-c304ddec9e48\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zh9\" (UniqueName: \"kubernetes.io/projected/4fa45b77-7f33-448e-9855-5f9f5117bf82-kube-api-access-57zh9\") pod \"mariadb-operator-controller-manager-67bf948998-h6hbr\" (UID: \"4fa45b77-7f33-448e-9855-5f9f5117bf82\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.669876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt6m8\" (UniqueName: \"kubernetes.io/projected/d81220d4-722d-4fc5-9626-826d1eccc841-kube-api-access-xt6m8\") pod \"ovn-operator-controller-manager-788c46999f-mhdmm\" (UID: \"d81220d4-722d-4fc5-9626-826d1eccc841\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" Jan 29 16:50:31 crc kubenswrapper[4813]: E0129 16:50:31.670356 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:31 crc kubenswrapper[4813]: E0129 16:50:31.670445 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert podName:30cb06d1-8e0a-4702-bd9c-42561afd684c nodeName:}" failed. No retries permitted until 2026-01-29 16:50:32.170427276 +0000 UTC m=+1284.657630492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" (UID: "30cb06d1-8e0a-4702-bd9c-42561afd684c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.688786 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.694588 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.702658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2ct\" (UniqueName: \"kubernetes.io/projected/b67ed668-3ff7-416c-8014-1a5f9668b54c-kube-api-access-lx2ct\") pod \"nova-operator-controller-manager-55bff696bd-5mtzp\" (UID: \"b67ed668-3ff7-416c-8014-1a5f9668b54c\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.702691 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zh9\" (UniqueName: \"kubernetes.io/projected/4fa45b77-7f33-448e-9855-5f9f5117bf82-kube-api-access-57zh9\") pod \"mariadb-operator-controller-manager-67bf948998-h6hbr\" (UID: \"4fa45b77-7f33-448e-9855-5f9f5117bf82\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.703356 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxjwj\" (UniqueName: \"kubernetes.io/projected/30cb06d1-8e0a-4702-bd9c-42561afd684c-kube-api-access-gxjwj\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.707804 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84pt\" (UniqueName: \"kubernetes.io/projected/6ca5d465-220c-4c51-a5be-c304ddec9e48-kube-api-access-d84pt\") pod \"placement-operator-controller-manager-5b964cf4cd-8k6nd\" (UID: \"6ca5d465-220c-4c51-a5be-c304ddec9e48\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.709680 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.718973 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wsgqh"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.719764 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.722167 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mm5j6" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.742879 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt6m8\" (UniqueName: \"kubernetes.io/projected/d81220d4-722d-4fc5-9626-826d1eccc841-kube-api-access-xt6m8\") pod \"ovn-operator-controller-manager-788c46999f-mhdmm\" (UID: \"d81220d4-722d-4fc5-9626-826d1eccc841\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.743295 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.749949 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.771839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.772025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsm9\" (UniqueName: \"kubernetes.io/projected/123e5d6b-93c9-452c-9c77-81333f65487d-kube-api-access-nmsm9\") pod \"telemetry-operator-controller-manager-64b5b76f97-zhpjf\" (UID: \"123e5d6b-93c9-452c-9c77-81333f65487d\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.772073 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwww5\" (UniqueName: \"kubernetes.io/projected/c709a691-8375-4f1f-8552-12e96c2da0a8-kube-api-access-dwww5\") pod \"test-operator-controller-manager-56f8bfcd9f-p9kz6\" (UID: \"c709a691-8375-4f1f-8552-12e96c2da0a8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.772127 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlp84\" (UniqueName: \"kubernetes.io/projected/c5267884-745c-47aa-a389-87da2899b706-kube-api-access-wlp84\") pod \"swift-operator-controller-manager-68fc8c869-rhsqf\" (UID: \"c5267884-745c-47aa-a389-87da2899b706\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" Jan 29 16:50:31 crc kubenswrapper[4813]: E0129 16:50:31.772387 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:31 crc kubenswrapper[4813]: E0129 16:50:31.772463 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert podName:239ca114-1b8f-447f-968e-83c058fb678e nodeName:}" failed. No retries permitted until 2026-01-29 16:50:32.772440901 +0000 UTC m=+1285.259644117 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert") pod "infra-operator-controller-manager-79955696d6-wmz96" (UID: "239ca114-1b8f-447f-968e-83c058fb678e") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.785011 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.790135 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-wsgqh"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.798322 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwww5\" (UniqueName: \"kubernetes.io/projected/c709a691-8375-4f1f-8552-12e96c2da0a8-kube-api-access-dwww5\") pod \"test-operator-controller-manager-56f8bfcd9f-p9kz6\" (UID: \"c709a691-8375-4f1f-8552-12e96c2da0a8\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.803358 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlp84\" (UniqueName: \"kubernetes.io/projected/c5267884-745c-47aa-a389-87da2899b706-kube-api-access-wlp84\") pod \"swift-operator-controller-manager-68fc8c869-rhsqf\" (UID: \"c5267884-745c-47aa-a389-87da2899b706\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.817849 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmsm9\" (UniqueName: \"kubernetes.io/projected/123e5d6b-93c9-452c-9c77-81333f65487d-kube-api-access-nmsm9\") pod \"telemetry-operator-controller-manager-64b5b76f97-zhpjf\" (UID: \"123e5d6b-93c9-452c-9c77-81333f65487d\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.836264 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.837247 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.840073 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.843505 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.845434 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.845477 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.847347 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j6x9f" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.852215 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.874101 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtsh8\" (UniqueName: \"kubernetes.io/projected/a3fcd8e8-4123-4172-a3f0-86696ee14d71-kube-api-access-dtsh8\") pod \"watcher-operator-controller-manager-564965969-wsgqh\" (UID: \"a3fcd8e8-4123-4172-a3f0-86696ee14d71\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.874658 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.875591 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.882605 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wx7xs" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.885384 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm"] Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.901017 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.924872 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.957770 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.975800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxzsd\" (UniqueName: \"kubernetes.io/projected/1f535307-b492-4576-875c-387684d62a42-kube-api-access-cxzsd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hvxtm\" (UID: \"1f535307-b492-4576-875c-387684d62a42\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.975865 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgfgc\" (UniqueName: \"kubernetes.io/projected/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-kube-api-access-jgfgc\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.975921 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtsh8\" (UniqueName: \"kubernetes.io/projected/a3fcd8e8-4123-4172-a3f0-86696ee14d71-kube-api-access-dtsh8\") pod \"watcher-operator-controller-manager-564965969-wsgqh\" (UID: \"a3fcd8e8-4123-4172-a3f0-86696ee14d71\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.975956 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:31 crc kubenswrapper[4813]: I0129 16:50:31.975971 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.000853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtsh8\" (UniqueName: \"kubernetes.io/projected/a3fcd8e8-4123-4172-a3f0-86696ee14d71-kube-api-access-dtsh8\") pod \"watcher-operator-controller-manager-564965969-wsgqh\" (UID: \"a3fcd8e8-4123-4172-a3f0-86696ee14d71\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.047466 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4"] Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.076745 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxzsd\" (UniqueName: \"kubernetes.io/projected/1f535307-b492-4576-875c-387684d62a42-kube-api-access-cxzsd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hvxtm\" (UID: \"1f535307-b492-4576-875c-387684d62a42\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" 
Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.076802 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgfgc\" (UniqueName: \"kubernetes.io/projected/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-kube-api-access-jgfgc\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.076857 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.076876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.077278 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.077322 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.077350 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:32.577330734 +0000 UTC m=+1285.064533950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "webhook-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.077377 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:32.577358404 +0000 UTC m=+1285.064561620 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "metrics-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.098979 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxzsd\" (UniqueName: \"kubernetes.io/projected/1f535307-b492-4576-875c-387684d62a42-kube-api-access-cxzsd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hvxtm\" (UID: \"1f535307-b492-4576-875c-387684d62a42\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.099623 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgfgc\" (UniqueName: \"kubernetes.io/projected/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-kube-api-access-jgfgc\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.132814 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.178771 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.178971 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.179065 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert podName:30cb06d1-8e0a-4702-bd9c-42561afd684c nodeName:}" failed. No retries permitted until 2026-01-29 16:50:33.17904046 +0000 UTC m=+1285.666243676 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" (UID: "30cb06d1-8e0a-4702-bd9c-42561afd684c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.208354 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd"] Jan 29 16:50:32 crc kubenswrapper[4813]: W0129 16:50:32.256836 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7c7a81c_3f15_493f_b7cd_97486af5c4a8.slice/crio-0d65f5fc7f8173f87db53e25571cb6549f023e1d55e37814654312fd9caac096 WatchSource:0}: Error finding container 0d65f5fc7f8173f87db53e25571cb6549f023e1d55e37814654312fd9caac096: Status 404 returned error can't find the container with id 0d65f5fc7f8173f87db53e25571cb6549f023e1d55e37814654312fd9caac096 Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.290756 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.552474 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx"] Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.585849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.585896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.586211 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.586222 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.586294 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:33.586269647 +0000 UTC m=+1286.073472873 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "metrics-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.586324 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:33.586304898 +0000 UTC m=+1286.073508114 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "webhook-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.638270 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" event={"ID":"e5d8b964-7415-4684-85a2-2eb6ba75acb6","Type":"ContainerStarted","Data":"995667cea1c72d5162b5bb94bf83a8e51818921d47df004c54e9e7930741b758"} Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.645052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" event={"ID":"d7c7a81c-3f15-493f-b7cd-97486af5c4a8","Type":"ContainerStarted","Data":"0d65f5fc7f8173f87db53e25571cb6549f023e1d55e37814654312fd9caac096"} Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.648217 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" event={"ID":"a5517fc3-4c76-4ecc-bfca-240cbb5877af","Type":"ContainerStarted","Data":"5118fba68af69481877f1ce820951806d088727a5a03c52b0e29d1e84576bf39"} Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.788490 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.788706 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: E0129 16:50:32.788946 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert podName:239ca114-1b8f-447f-968e-83c058fb678e nodeName:}" failed. No retries permitted until 2026-01-29 16:50:34.788919898 +0000 UTC m=+1287.276123204 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert") pod "infra-operator-controller-manager-79955696d6-wmz96" (UID: "239ca114-1b8f-447f-968e-83c058fb678e") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.888669 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx"] Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.911274 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt"] Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.931098 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm"] Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.952987 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp"] Jan 29 16:50:32 crc kubenswrapper[4813]: I0129 16:50:32.974363 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:32.998440 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr"] Jan 29 16:50:33 crc kubenswrapper[4813]: W0129 16:50:33.008831 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7614801_3c1c_43c9_b270_8121ef10bb6f.slice/crio-8ab759e65143cccffda08b60b3793858e8e7aa421563e988d644d6f3754a8cf0 WatchSource:0}: Error finding container 8ab759e65143cccffda08b60b3793858e8e7aa421563e988d644d6f3754a8cf0: Status 404 returned error can't find the container with id 8ab759e65143cccffda08b60b3793858e8e7aa421563e988d644d6f3754a8cf0 Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.012085 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.026143 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.036344 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.052449 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.065289 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.072492 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.083159 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.089639 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/watcher-operator-controller-manager-564965969-wsgqh"] Jan 29 16:50:33 crc kubenswrapper[4813]: W0129 16:50:33.113138 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ec4543_cad4_4120_addd_0aeb8756eaaf.slice/crio-bd8b26832c8bd4a627f64e8ad0b15f6d9e390e69f7d727416715240770c41b2d WatchSource:0}: Error finding container bd8b26832c8bd4a627f64e8ad0b15f6d9e390e69f7d727416715240770c41b2d: Status 404 returned error can't find the container with id bd8b26832c8bd4a627f64e8ad0b15f6d9e390e69f7d727416715240770c41b2d Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.129654 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtsh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-wsgqh_openstack-operators(a3fcd8e8-4123-4172-a3f0-86696ee14d71): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.131196 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" podUID="a3fcd8e8-4123-4172-a3f0-86696ee14d71" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 
16:50:33.134485 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vkqck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d9697b7f4-fq8n9_openstack-operators(73e44ad8-56ed-43ed-8eed-d466ad56e480): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.138488 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" podUID="73e44ad8-56ed-43ed-8eed-d466ad56e480" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.169375 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xsfgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-dk624_openstack-operators(79ec4543-cad4-4120-addd-0aeb8756eaaf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.170988 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" podUID="79ec4543-cad4-4120-addd-0aeb8756eaaf" Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.185001 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm"] Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.188693 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwww5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-p9kz6_openstack-operators(c709a691-8375-4f1f-8552-12e96c2da0a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.189824 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" podUID="c709a691-8375-4f1f-8552-12e96c2da0a8" Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.192212 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6"] Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.195908 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf"] Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.198248 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: 
{{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cxzsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-hvxtm_openstack-operators(1f535307-b492-4576-875c-387684d62a42): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.200156 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" podUID="1f535307-b492-4576-875c-387684d62a42" Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.203490 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.203774 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.203835 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert podName:30cb06d1-8e0a-4702-bd9c-42561afd684c nodeName:}" failed. No retries permitted until 2026-01-29 16:50:35.203816867 +0000 UTC m=+1287.691020083 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" (UID: "30cb06d1-8e0a-4702-bd9c-42561afd684c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.206861 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wlp84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-rhsqf_openstack-operators(c5267884-745c-47aa-a389-87da2899b706): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.208166 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" podUID="c5267884-745c-47aa-a389-87da2899b706"
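The burst of "ErrImagePull: pull QPS exceeded" failures above is not a registry-side rejection: it is the kubelet's own client-side pull limiter, a token bucket sized by the KubeletConfiguration fields registryPullQPS and registryBurst (5 pulls per second with a burst of 10 by default). Starting a couple of dozen operator pods at once drains the burst budget, so the surplus pulls fail immediately and drop into image-pull back-off. Below is a minimal Go sketch of that behaviour using the client-go token-bucket helper that backs the kubelet limiter; the pull count of 20 is illustrative, not taken from this log.

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Token bucket shaped like the kubelet defaults: registryPullQPS=5,
        // registryBurst=10. TryAccept() returning false is what surfaces in
        // the log as ErrImagePull: "pull QPS exceeded".
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

        // A stampede of simultaneous pulls: the first ten ride the burst
        // budget, the rest are rejected without waiting.
        for i := 1; i <= 20; i++ {
            if limiter.TryAccept() {
                fmt.Printf("pull %2d: admitted\n", i)
            } else {
                fmt.Printf("pull %2d: pull QPS exceeded\n", i)
            }
        }
    }

Pods throttled this way recover on their own once the bucket refills, which is why the same pods below report ImagePullBackOff and eventually start rather than failing permanently.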
\"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.608724 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.608874 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.608932 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:35.608912572 +0000 UTC m=+1288.096115788 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "webhook-server-cert" not found Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.609360 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.609391 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:35.609382245 +0000 UTC m=+1288.096585461 (durationBeforeRetry 2s). 
Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.662197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" event={"ID":"8fb27cac-ef23-4b14-bee3-7b69233d9cdc","Type":"ContainerStarted","Data":"79bc86f486c6178ce07d3efce0587f7a29f4dcf8b6fd301793b720a3ff53928d"}
Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.664162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" event={"ID":"c709a691-8375-4f1f-8552-12e96c2da0a8","Type":"ContainerStarted","Data":"40c2a7adc0e5a06bf5f8a6053db2b2ba1d9f858f4869cec6d63351704254766c"}
Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.665912 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" podUID="c709a691-8375-4f1f-8552-12e96c2da0a8"
Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.666903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" event={"ID":"4fa45b77-7f33-448e-9855-5f9f5117bf82","Type":"ContainerStarted","Data":"d70201151e62262e1bf36f31e07ef9f6f810fd32716c3081d19af10d4262090f"}
Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.670825 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" event={"ID":"123e5d6b-93c9-452c-9c77-81333f65487d","Type":"ContainerStarted","Data":"834410c012c620e1e2487f627fe073dfe0b7826f00a543ba20a5ad58f003df39"}
Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.673715 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" event={"ID":"53f25c98-8891-4e13-afaa-4cbae4bf1c7e","Type":"ContainerStarted","Data":"932be86c4b952b4be50e8698f72008dfda5c4f1f7d96ee0ac83f82a164138b7d"}
Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.675249 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" event={"ID":"d81220d4-722d-4fc5-9626-826d1eccc841","Type":"ContainerStarted","Data":"63fa17952bd89faf6afcb3dd09a18a0e21d9d580c8bb6f28d4016246ffdb3ab4"}
Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.680715 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" event={"ID":"c5267884-745c-47aa-a389-87da2899b706","Type":"ContainerStarted","Data":"e02b3d382b57cde70bbbe5c6cfac8d137cb1b1ccd0c965f2d86cd5cdf8539581"}
Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.695427 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\""
pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" podUID="c5267884-745c-47aa-a389-87da2899b706" Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.702362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" event={"ID":"b24c6187-bc48-4261-b142-2269f470e58a","Type":"ContainerStarted","Data":"14c009833556d133e95b82847ef0f2fe90164345a882797371d752321784be30"} Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.708650 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" event={"ID":"74c64061-8f84-45f4-813c-028126b44630","Type":"ContainerStarted","Data":"eb78e94babae3564c2d3efb9c12232bd67be3e91edc3a6234c5a61880b999ff7"} Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.711675 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" event={"ID":"1f535307-b492-4576-875c-387684d62a42","Type":"ContainerStarted","Data":"81323c42c73750ca55e40d427755f9c03312729d5690c08a38a076d7e2d78d9a"} Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.714240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" event={"ID":"b67ed668-3ff7-416c-8014-1a5f9668b54c","Type":"ContainerStarted","Data":"7832848ef5236b00f6afda0a40c2ae05228bfa8947ce65e0a818cf6803813727"} Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.714657 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" podUID="1f535307-b492-4576-875c-387684d62a42" Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.715446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" event={"ID":"79ec4543-cad4-4120-addd-0aeb8756eaaf","Type":"ContainerStarted","Data":"bd8b26832c8bd4a627f64e8ad0b15f6d9e390e69f7d727416715240770c41b2d"} Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.716665 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" event={"ID":"73e44ad8-56ed-43ed-8eed-d466ad56e480","Type":"ContainerStarted","Data":"465652750b4f4c391669f71c39c1d11c62e6e4e4578a2c6f0aa8d5d5b34af356"} Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.717343 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" podUID="79ec4543-cad4-4120-addd-0aeb8756eaaf" Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.719249 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" podUID="73e44ad8-56ed-43ed-8eed-d466ad56e480" Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.719384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" event={"ID":"f6c9bd68-507b-4ecc-a1cd-e88ab1e96727","Type":"ContainerStarted","Data":"0f010b0dfc73e1cfd4e26f53b049442ea5ea0a5a93295cfcf1d47b3790d32641"} Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.721026 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" event={"ID":"6ca5d465-220c-4c51-a5be-c304ddec9e48","Type":"ContainerStarted","Data":"9180775d77ceaa78fc11fb0c5066124052d0834429d22aa607c55716211ddb25"} Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.722921 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" event={"ID":"a3fcd8e8-4123-4172-a3f0-86696ee14d71","Type":"ContainerStarted","Data":"b8d5d48202f6a65235915baba21117147a47108000d48448dedf99fe814a3422"} Jan 29 16:50:33 crc kubenswrapper[4813]: E0129 16:50:33.725541 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" podUID="a3fcd8e8-4123-4172-a3f0-86696ee14d71" Jan 29 16:50:33 crc kubenswrapper[4813]: I0129 16:50:33.728189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" event={"ID":"b7614801-3c1c-43c9-b270-8121ef10bb6f","Type":"ContainerStarted","Data":"8ab759e65143cccffda08b60b3793858e8e7aa421563e988d644d6f3754a8cf0"} Jan 29 16:50:34 crc kubenswrapper[4813]: E0129 16:50:34.752348 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" podUID="c5267884-745c-47aa-a389-87da2899b706" Jan 29 16:50:34 crc kubenswrapper[4813]: E0129 16:50:34.752423 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" podUID="a3fcd8e8-4123-4172-a3f0-86696ee14d71" Jan 29 16:50:34 crc kubenswrapper[4813]: E0129 16:50:34.752512 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" podUID="c709a691-8375-4f1f-8552-12e96c2da0a8" Jan 29 16:50:34 crc kubenswrapper[4813]: E0129 16:50:34.752636 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" podUID="1f535307-b492-4576-875c-387684d62a42" Jan 29 16:50:34 crc kubenswrapper[4813]: E0129 16:50:34.752717 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" podUID="79ec4543-cad4-4120-addd-0aeb8756eaaf" Jan 29 16:50:34 crc kubenswrapper[4813]: E0129 16:50:34.752782 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:d9f6f8dc6a6dd9b0d7c96e4c89b3056291fd61f11126a1304256a4d6cacd0382\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" podUID="73e44ad8-56ed-43ed-8eed-d466ad56e480" Jan 29 16:50:34 crc kubenswrapper[4813]: I0129 16:50:34.838456 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:34 crc kubenswrapper[4813]: E0129 16:50:34.838677 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:34 crc kubenswrapper[4813]: E0129 16:50:34.838751 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert podName:239ca114-1b8f-447f-968e-83c058fb678e nodeName:}" failed. No retries permitted until 2026-01-29 16:50:38.838727887 +0000 UTC m=+1291.325931103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert") pod "infra-operator-controller-manager-79955696d6-wmz96" (UID: "239ca114-1b8f-447f-968e-83c058fb678e") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:35 crc kubenswrapper[4813]: I0129 16:50:35.271302 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:35 crc kubenswrapper[4813]: E0129 16:50:35.271740 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:35 crc kubenswrapper[4813]: E0129 16:50:35.271783 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert podName:30cb06d1-8e0a-4702-bd9c-42561afd684c nodeName:}" failed. 
No retries permitted until 2026-01-29 16:50:39.271768682 +0000 UTC m=+1291.758971898 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" (UID: "30cb06d1-8e0a-4702-bd9c-42561afd684c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:35 crc kubenswrapper[4813]: I0129 16:50:35.685053 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:35 crc kubenswrapper[4813]: I0129 16:50:35.685145 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:35 crc kubenswrapper[4813]: E0129 16:50:35.685325 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:50:35 crc kubenswrapper[4813]: E0129 16:50:35.685375 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:50:35 crc kubenswrapper[4813]: E0129 16:50:35.685431 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:39.685405314 +0000 UTC m=+1292.172608700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "metrics-server-cert" not found Jan 29 16:50:35 crc kubenswrapper[4813]: E0129 16:50:35.685461 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:39.685449085 +0000 UTC m=+1292.172652511 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "webhook-server-cert" not found Jan 29 16:50:38 crc kubenswrapper[4813]: I0129 16:50:38.845405 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:50:38 crc kubenswrapper[4813]: E0129 16:50:38.845664 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:38 crc kubenswrapper[4813]: E0129 16:50:38.846242 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert podName:239ca114-1b8f-447f-968e-83c058fb678e nodeName:}" failed. No retries permitted until 2026-01-29 16:50:46.846207498 +0000 UTC m=+1299.333410734 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert") pod "infra-operator-controller-manager-79955696d6-wmz96" (UID: "239ca114-1b8f-447f-968e-83c058fb678e") : secret "infra-operator-webhook-server-cert" not found Jan 29 16:50:39 crc kubenswrapper[4813]: I0129 16:50:39.353661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:50:39 crc kubenswrapper[4813]: E0129 16:50:39.354810 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:39 crc kubenswrapper[4813]: E0129 16:50:39.354909 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert podName:30cb06d1-8e0a-4702-bd9c-42561afd684c nodeName:}" failed. No retries permitted until 2026-01-29 16:50:47.354886394 +0000 UTC m=+1299.842089600 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" (UID: "30cb06d1-8e0a-4702-bd9c-42561afd684c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 16:50:39 crc kubenswrapper[4813]: I0129 16:50:39.759578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:39 crc kubenswrapper[4813]: I0129 16:50:39.759635 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:50:39 crc kubenswrapper[4813]: E0129 16:50:39.759789 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 16:50:39 crc kubenswrapper[4813]: E0129 16:50:39.759870 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:47.759849055 +0000 UTC m=+1300.247052281 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "metrics-server-cert" not found Jan 29 16:50:39 crc kubenswrapper[4813]: E0129 16:50:39.759890 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 16:50:39 crc kubenswrapper[4813]: E0129 16:50:39.759998 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:50:47.759977509 +0000 UTC m=+1300.247180715 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "webhook-server-cert" not found
Jan 29 16:50:46 crc kubenswrapper[4813]: I0129 16:50:46.882059 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96"
Jan 29 16:50:46 crc kubenswrapper[4813]: E0129 16:50:46.882269 4813 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 16:50:46 crc kubenswrapper[4813]: E0129 16:50:46.883217 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert podName:239ca114-1b8f-447f-968e-83c058fb678e nodeName:}" failed. No retries permitted until 2026-01-29 16:51:02.883189618 +0000 UTC m=+1315.370392834 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert") pod "infra-operator-controller-manager-79955696d6-wmz96" (UID: "239ca114-1b8f-447f-968e-83c058fb678e") : secret "infra-operator-webhook-server-cert" not found
Jan 29 16:50:47 crc kubenswrapper[4813]: I0129 16:50:47.391071 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r"
Jan 29 16:50:47 crc kubenswrapper[4813]: E0129 16:50:47.391352 4813 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 16:50:47 crc kubenswrapper[4813]: E0129 16:50:47.391479 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert podName:30cb06d1-8e0a-4702-bd9c-42561afd684c nodeName:}" failed. No retries permitted until 2026-01-29 16:51:03.391448011 +0000 UTC m=+1315.878651387 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" (UID: "30cb06d1-8e0a-4702-bd9c-42561afd684c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
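Note the durationBeforeRetry values across these mount failures: 2s at 16:50:33, 4s at 16:50:34-35, 8s at 16:50:38-39, and 16s here. The volume manager's nestedpendingoperations doubles the per-volume wait after each consecutive failure, so while a secret stays missing the retry rate halves each round rather than hammering the API server. A small sketch of that doubling; the 2s seed and the cap are illustrative assumptions, the factor of two is what the log shows:

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff reproduces the doubling visible in the durationBeforeRetry
    // values above (2s, 4s, 8s, 16s, ...), capped so retries never stop
    // entirely. Seed and cap are illustrative, not read out of kubelet source.
    func nextBackoff(prev, limit time.Duration) time.Duration {
        if prev == 0 {
            return 2 * time.Second
        }
        if next := 2 * prev; next < limit {
            return next
        }
        return limit
    }

    func main() {
        var d time.Duration
        for attempt := 1; attempt <= 6; attempt++ {
            d = nextBackoff(d, 2*time.Minute)
            fmt.Printf("attempt %d: durationBeforeRetry %s\n", attempt, d)
        }
    }

The back-off resets once a mount attempt succeeds, which is why a late-arriving secret costs at most one more interval of waiting.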
Jan 29 16:50:47 crc kubenswrapper[4813]: I0129 16:50:47.798195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455"
Jan 29 16:50:47 crc kubenswrapper[4813]: I0129 16:50:47.798251 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455"
Jan 29 16:50:47 crc kubenswrapper[4813]: E0129 16:50:47.798513 4813 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 16:50:47 crc kubenswrapper[4813]: E0129 16:50:47.798648 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:51:03.798619696 +0000 UTC m=+1316.285822912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "webhook-server-cert" not found
Jan 29 16:50:47 crc kubenswrapper[4813]: E0129 16:50:47.798879 4813 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 16:50:47 crc kubenswrapper[4813]: E0129 16:50:47.799036 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs podName:69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b nodeName:}" failed. No retries permitted until 2026-01-29 16:51:03.799011728 +0000 UTC m=+1316.286214954 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-k4455" (UID: "69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b") : secret "metrics-server-cert" not found Jan 29 16:50:54 crc kubenswrapper[4813]: E0129 16:50:54.123253 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 16:50:54 crc kubenswrapper[4813]: E0129 16:50:54.123877 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d84pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-8k6nd_openstack-operators(6ca5d465-220c-4c51-a5be-c304ddec9e48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:50:54 crc kubenswrapper[4813]: E0129 16:50:54.125070 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" podUID="6ca5d465-220c-4c51-a5be-c304ddec9e48" Jan 29 16:50:54 crc kubenswrapper[4813]: E0129 16:50:54.775436 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 29 16:50:54 crc kubenswrapper[4813]: E0129 16:50:54.775925 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmsm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-zhpjf_openstack-operators(123e5d6b-93c9-452c-9c77-81333f65487d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:50:54 crc kubenswrapper[4813]: E0129 16:50:54.777741 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" podUID="123e5d6b-93c9-452c-9c77-81333f65487d" Jan 29 16:50:54 crc kubenswrapper[4813]: E0129 16:50:54.881091 4813 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" podUID="123e5d6b-93c9-452c-9c77-81333f65487d" Jan 29 16:50:54 crc kubenswrapper[4813]: E0129 16:50:54.881624 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" podUID="6ca5d465-220c-4c51-a5be-c304ddec9e48" Jan 29 16:50:55 crc kubenswrapper[4813]: E0129 16:50:55.436874 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 16:50:55 crc kubenswrapper[4813]: E0129 16:50:55.437143 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4gpx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-7dd968899f-94b2b_openstack-operators(b24c6187-bc48-4261-b142-2269f470e58a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:50:55 crc kubenswrapper[4813]: E0129 16:50:55.439583 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" podUID="b24c6187-bc48-4261-b142-2269f470e58a" Jan 29 16:50:55 crc kubenswrapper[4813]: E0129 16:50:55.886444 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" podUID="b24c6187-bc48-4261-b142-2269f470e58a" Jan 29 16:50:56 crc kubenswrapper[4813]: E0129 16:50:56.028147 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 29 16:50:56 crc kubenswrapper[4813]: E0129 16:50:56.028336 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p4xn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-jssxt_openstack-operators(53f25c98-8891-4e13-afaa-4cbae4bf1c7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:50:56 crc kubenswrapper[4813]: E0129 16:50:56.030145 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" podUID="53f25c98-8891-4e13-afaa-4cbae4bf1c7e" Jan 29 16:50:56 crc kubenswrapper[4813]: E0129 16:50:56.723053 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 16:50:56 crc kubenswrapper[4813]: E0129 16:50:56.723267 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9c5kx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-wmqxl_openstack-operators(74c64061-8f84-45f4-813c-028126b44630): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:50:56 crc kubenswrapper[4813]: E0129 16:50:56.724890 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" podUID="74c64061-8f84-45f4-813c-028126b44630" Jan 29 16:50:56 crc kubenswrapper[4813]: E0129 16:50:56.893257 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" podUID="74c64061-8f84-45f4-813c-028126b44630" Jan 29 16:50:56 crc kubenswrapper[4813]: E0129 16:50:56.893533 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" podUID="53f25c98-8891-4e13-afaa-4cbae4bf1c7e" Jan 29 16:50:57 crc kubenswrapper[4813]: E0129 16:50:57.896339 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c" Jan 29 16:50:57 crc kubenswrapper[4813]: E0129 16:50:57.896508 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s5qdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b6c4d8c5f-99psd_openstack-operators(d7c7a81c-3f15-493f-b7cd-97486af5c4a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:50:57 crc kubenswrapper[4813]: E0129 16:50:57.898178 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" podUID="d7c7a81c-3f15-493f-b7cd-97486af5c4a8" Jan 29 16:50:58 crc kubenswrapper[4813]: E0129 16:50:58.443397 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 29 16:50:58 crc kubenswrapper[4813]: E0129 16:50:58.443570 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xt6m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-mhdmm_openstack-operators(d81220d4-722d-4fc5-9626-826d1eccc841): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:50:58 crc kubenswrapper[4813]: E0129 16:50:58.445651 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" podUID="d81220d4-722d-4fc5-9626-826d1eccc841" Jan 29 16:50:58 crc kubenswrapper[4813]: E0129 16:50:58.905650 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" podUID="d81220d4-722d-4fc5-9626-826d1eccc841" Jan 29 16:50:58 crc kubenswrapper[4813]: E0129 16:50:58.907765 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" podUID="d7c7a81c-3f15-493f-b7cd-97486af5c4a8" Jan 29 16:50:58 crc kubenswrapper[4813]: E0129 16:50:58.927616 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 29 16:50:58 crc kubenswrapper[4813]: E0129 16:50:58.927834 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4zglr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-qrrsx_openstack-operators(e5d8b964-7415-4684-85a2-2eb6ba75acb6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:50:58 crc kubenswrapper[4813]: E0129 16:50:58.928957 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" podUID="e5d8b964-7415-4684-85a2-2eb6ba75acb6" Jan 29 16:50:59 crc kubenswrapper[4813]: E0129 16:50:59.909674 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" podUID="e5d8b964-7415-4684-85a2-2eb6ba75acb6" Jan 29 16:51:00 crc kubenswrapper[4813]: I0129 16:51:00.240911 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:51:00 crc kubenswrapper[4813]: I0129 16:51:00.240971 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:51:00 crc kubenswrapper[4813]: I0129 16:51:00.262784 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 16:51:00 crc kubenswrapper[4813]: I0129 16:51:00.263267 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8987fbd6eab75cb8c8d4b0dc3c9cd4584d6a2ba36fcbf1141525385eea963b7d"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:51:00 crc kubenswrapper[4813]: I0129 16:51:00.263324 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://8987fbd6eab75cb8c8d4b0dc3c9cd4584d6a2ba36fcbf1141525385eea963b7d" gracePeriod=600 Jan 29 16:51:00 crc kubenswrapper[4813]: I0129 16:51:00.916858 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="8987fbd6eab75cb8c8d4b0dc3c9cd4584d6a2ba36fcbf1141525385eea963b7d" exitCode=0 Jan 29 16:51:00 crc kubenswrapper[4813]: I0129 16:51:00.916904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"8987fbd6eab75cb8c8d4b0dc3c9cd4584d6a2ba36fcbf1141525385eea963b7d"} Jan 29 16:51:00 crc kubenswrapper[4813]: I0129 16:51:00.916943 4813 scope.go:117] "RemoveContainer" containerID="7f51954817b4222a923a0139a8498177396724dcae69eeee154578e427cd34a8" Jan 29 16:51:02 crc kubenswrapper[4813]: I0129 16:51:02.933748 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:51:02 crc kubenswrapper[4813]: I0129 16:51:02.954736 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/239ca114-1b8f-447f-968e-83c058fb678e-cert\") pod \"infra-operator-controller-manager-79955696d6-wmz96\" (UID: \"239ca114-1b8f-447f-968e-83c058fb678e\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:51:02 crc kubenswrapper[4813]: I0129 16:51:02.956486 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.441404 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.477232 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30cb06d1-8e0a-4702-bd9c-42561afd684c-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r\" (UID: \"30cb06d1-8e0a-4702-bd9c-42561afd684c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.521824 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wmz96"] Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.561553 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.856527 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.856893 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.869253 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.883262 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-k4455\" (UID: \"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.966405 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" event={"ID":"c5267884-745c-47aa-a389-87da2899b706","Type":"ContainerStarted","Data":"70c10633455e5b1a6581f8a009bbb84d06bc2dac59a0480686a4f89e5b88e115"} Jan 29 16:51:03 crc 
kubenswrapper[4813]: I0129 16:51:03.967496 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.974584 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" event={"ID":"8fb27cac-ef23-4b14-bee3-7b69233d9cdc","Type":"ContainerStarted","Data":"27586380b87ca9b13ed9ff1ef3d4e42a8c7096df63b21c13e05846d8e66be23a"} Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.975078 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.995876 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" event={"ID":"a5517fc3-4c76-4ecc-bfca-240cbb5877af","Type":"ContainerStarted","Data":"e1bbb05feb1fa86506c54d3de28e0a08cb24373a696e9737647f45f95be5daf5"} Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.996478 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" podStartSLOduration=3.00724078 podStartE2EDuration="32.996462403s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.206633288 +0000 UTC m=+1285.693836504" lastFinishedPulling="2026-01-29 16:51:03.195854911 +0000 UTC m=+1315.683058127" observedRunningTime="2026-01-29 16:51:03.995554287 +0000 UTC m=+1316.482757503" watchObservedRunningTime="2026-01-29 16:51:03.996462403 +0000 UTC m=+1316.483665619" Jan 29 16:51:03 crc kubenswrapper[4813]: I0129 16:51:03.997871 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.002585 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" event={"ID":"b7614801-3c1c-43c9-b270-8121ef10bb6f","Type":"ContainerStarted","Data":"478cbfa107541646884303eef0937dbb03bbeb5daeb43d581c16dd280d25d51f"} Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.003345 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.069326 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" event={"ID":"79ec4543-cad4-4120-addd-0aeb8756eaaf","Type":"ContainerStarted","Data":"8db9f6227b3348c32eedc268370c1283e389cf03b5efc11582074296ed953f7d"} Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.070345 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.084227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" event={"ID":"4fa45b77-7f33-448e-9855-5f9f5117bf82","Type":"ContainerStarted","Data":"e2b5afa0127b163c09831d6292fea49a46a03113279343879f505758c018f93e"} Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.084549 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.132862 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" podStartSLOduration=4.94175175 podStartE2EDuration="33.132844744s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:32.974693509 +0000 UTC m=+1285.461896725" lastFinishedPulling="2026-01-29 16:51:01.165786503 +0000 UTC m=+1313.652989719" observedRunningTime="2026-01-29 16:51:04.058432559 +0000 UTC m=+1316.545635765" watchObservedRunningTime="2026-01-29 16:51:04.132844744 +0000 UTC m=+1316.620047960" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.152398 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.154507 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" podStartSLOduration=5.007970728 podStartE2EDuration="33.154495191s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.019330672 +0000 UTC m=+1285.506533888" lastFinishedPulling="2026-01-29 16:51:01.165855135 +0000 UTC m=+1313.653058351" observedRunningTime="2026-01-29 16:51:04.138092806 +0000 UTC m=+1316.625296022" watchObservedRunningTime="2026-01-29 16:51:04.154495191 +0000 UTC m=+1316.641698407" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.156125 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" event={"ID":"239ca114-1b8f-447f-968e-83c058fb678e","Type":"ContainerStarted","Data":"5930ac7b0eaffb70bf03cfe1ab26ab0721d28377869376c1e485b7f3630e5a91"} Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.170064 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" event={"ID":"f6c9bd68-507b-4ecc-a1cd-e88ab1e96727","Type":"ContainerStarted","Data":"dab8a6b5e4353d79211d6c1554c7697b705f6c14376c3f6c05c6363f466707a1"} Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.170226 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.180155 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd"} Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.183089 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" event={"ID":"b67ed668-3ff7-416c-8014-1a5f9668b54c","Type":"ContainerStarted","Data":"cb9b6142ede99b78a71a34d51c3a4ac6bb566383e9d287b0b45a71d466de3839"} Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.183907 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.198726 4813 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" podStartSLOduration=5.198601812 podStartE2EDuration="34.198700392s" podCreationTimestamp="2026-01-29 16:50:30 +0000 UTC" firstStartedPulling="2026-01-29 16:50:32.168209606 +0000 UTC m=+1284.655412822" lastFinishedPulling="2026-01-29 16:51:01.168308186 +0000 UTC m=+1313.655511402" observedRunningTime="2026-01-29 16:51:04.169456175 +0000 UTC m=+1316.656659391" watchObservedRunningTime="2026-01-29 16:51:04.198700392 +0000 UTC m=+1316.685903608" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.210317 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r"] Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.225173 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" podStartSLOduration=3.2064887029999998 podStartE2EDuration="33.225153688s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.168407991 +0000 UTC m=+1285.655611207" lastFinishedPulling="2026-01-29 16:51:03.187072976 +0000 UTC m=+1315.674276192" observedRunningTime="2026-01-29 16:51:04.210156914 +0000 UTC m=+1316.697360130" watchObservedRunningTime="2026-01-29 16:51:04.225153688 +0000 UTC m=+1316.712356904" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.247450 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" podStartSLOduration=5.210805995 podStartE2EDuration="33.247432824s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.129274377 +0000 UTC m=+1285.616477593" lastFinishedPulling="2026-01-29 16:51:01.165901216 +0000 UTC m=+1313.653104422" observedRunningTime="2026-01-29 16:51:04.244440147 +0000 UTC m=+1316.731643373" watchObservedRunningTime="2026-01-29 16:51:04.247432824 +0000 UTC m=+1316.734636040" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.305692 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" podStartSLOduration=5.143233137 podStartE2EDuration="33.305674491s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.005865252 +0000 UTC m=+1285.493068458" lastFinishedPulling="2026-01-29 16:51:01.168306596 +0000 UTC m=+1313.655509812" observedRunningTime="2026-01-29 16:51:04.305248089 +0000 UTC m=+1316.792451305" watchObservedRunningTime="2026-01-29 16:51:04.305674491 +0000 UTC m=+1316.792877707" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.417497 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" podStartSLOduration=5.316473636 podStartE2EDuration="33.41746992s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.064860271 +0000 UTC m=+1285.552063487" lastFinishedPulling="2026-01-29 16:51:01.165856555 +0000 UTC m=+1313.653059771" observedRunningTime="2026-01-29 16:51:04.381960141 +0000 UTC m=+1316.869163367" watchObservedRunningTime="2026-01-29 16:51:04.41746992 +0000 UTC m=+1316.904673136" Jan 29 16:51:04 crc kubenswrapper[4813]: I0129 16:51:04.997193 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455"] Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.201664 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" event={"ID":"30cb06d1-8e0a-4702-bd9c-42561afd684c","Type":"ContainerStarted","Data":"0e38243c2b897ccd7f6b77561df015acc9a59c370a8756c6058a0c1f0eaa8526"} Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.203000 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" event={"ID":"1f535307-b492-4576-875c-387684d62a42","Type":"ContainerStarted","Data":"1fff2474551835b1473107cd9822cd9983d1bc7b2f30b293fe933de43c641a11"} Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.206311 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" event={"ID":"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b","Type":"ContainerStarted","Data":"0053645eddb857e81944136b69a38a3e5bebead376d0822764276f86c031beb4"} Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.211867 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" event={"ID":"73e44ad8-56ed-43ed-8eed-d466ad56e480","Type":"ContainerStarted","Data":"4f7dddb5072713721800e0356a441cccd8ad474e444730fb611d72b085d7337a"} Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.212442 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.219528 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" event={"ID":"a3fcd8e8-4123-4172-a3f0-86696ee14d71","Type":"ContainerStarted","Data":"f46ceff055b2fc327adf3c735aef2cc79d4f561b7e064b09bfe7dc1e54ba0173"} Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.220325 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.226046 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hvxtm" podStartSLOduration=4.1422433 podStartE2EDuration="34.226028912s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.198135182 +0000 UTC m=+1285.685338398" lastFinishedPulling="2026-01-29 16:51:03.281920794 +0000 UTC m=+1315.769124010" observedRunningTime="2026-01-29 16:51:05.223746576 +0000 UTC m=+1317.710949792" watchObservedRunningTime="2026-01-29 16:51:05.226028912 +0000 UTC m=+1317.713232148" Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.228438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" event={"ID":"c709a691-8375-4f1f-8552-12e96c2da0a8","Type":"ContainerStarted","Data":"6c9c2f7b4475abde4035e51134d7f773959487dec2109ffc7337fd7511c7af98"} Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.228804 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.246922 4813 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" podStartSLOduration=5.194197786 podStartE2EDuration="35.246903067s" podCreationTimestamp="2026-01-29 16:50:30 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.134326644 +0000 UTC m=+1285.621529860" lastFinishedPulling="2026-01-29 16:51:03.187031925 +0000 UTC m=+1315.674235141" observedRunningTime="2026-01-29 16:51:05.24252721 +0000 UTC m=+1317.729730446" watchObservedRunningTime="2026-01-29 16:51:05.246903067 +0000 UTC m=+1317.734106283" Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.263005 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" podStartSLOduration=4.204552745 podStartE2EDuration="34.262987343s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.129522234 +0000 UTC m=+1285.616725450" lastFinishedPulling="2026-01-29 16:51:03.187956832 +0000 UTC m=+1315.675160048" observedRunningTime="2026-01-29 16:51:05.261560202 +0000 UTC m=+1317.748763418" watchObservedRunningTime="2026-01-29 16:51:05.262987343 +0000 UTC m=+1317.750190559" Jan 29 16:51:05 crc kubenswrapper[4813]: I0129 16:51:05.278863 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" podStartSLOduration=4.25029574 podStartE2EDuration="34.278843582s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.188580205 +0000 UTC m=+1285.675783421" lastFinishedPulling="2026-01-29 16:51:03.217128047 +0000 UTC m=+1315.704331263" observedRunningTime="2026-01-29 16:51:05.278360038 +0000 UTC m=+1317.765563244" watchObservedRunningTime="2026-01-29 16:51:05.278843582 +0000 UTC m=+1317.766046798" Jan 29 16:51:06 crc kubenswrapper[4813]: I0129 16:51:06.252713 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" event={"ID":"69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b","Type":"ContainerStarted","Data":"cc6da997bb3633df159c5d1c9f666524eac150343cedbb50215a5ebf73990682"} Jan 29 16:51:07 crc kubenswrapper[4813]: I0129 16:51:07.264767 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:51:07 crc kubenswrapper[4813]: I0129 16:51:07.268946 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" podStartSLOduration=36.268930482 podStartE2EDuration="36.268930482s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:51:06.282680661 +0000 UTC m=+1318.769883877" watchObservedRunningTime="2026-01-29 16:51:07.268930482 +0000 UTC m=+1319.756133698" Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.274730 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" event={"ID":"239ca114-1b8f-447f-968e-83c058fb678e","Type":"ContainerStarted","Data":"88a5b986e0ce7bfd8719a315902750ec2b74d50f1769b498e980004a0a296731"} Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.275482 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.277551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" event={"ID":"74c64061-8f84-45f4-813c-028126b44630","Type":"ContainerStarted","Data":"463d91d14713a2877f53e93668a9a71d38d0ecad69a27e745120bc96d7dd04e9"} Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.277911 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.278857 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" event={"ID":"30cb06d1-8e0a-4702-bd9c-42561afd684c","Type":"ContainerStarted","Data":"cd80ab681baaef88b079e34d7fd18b4cb30b84c7975781ac860c964336449498"} Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.279123 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.324217 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" podStartSLOduration=33.65944093 podStartE2EDuration="37.324202942s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:51:03.551610487 +0000 UTC m=+1316.038813703" lastFinishedPulling="2026-01-29 16:51:07.216372499 +0000 UTC m=+1319.703575715" observedRunningTime="2026-01-29 16:51:08.318561028 +0000 UTC m=+1320.805764244" watchObservedRunningTime="2026-01-29 16:51:08.324202942 +0000 UTC m=+1320.811406158" Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.362618 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" podStartSLOduration=2.836472494 podStartE2EDuration="37.362595864s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.128712421 +0000 UTC m=+1285.615915637" lastFinishedPulling="2026-01-29 16:51:07.654835791 +0000 UTC m=+1320.142039007" observedRunningTime="2026-01-29 16:51:08.336813627 +0000 UTC m=+1320.824016843" watchObservedRunningTime="2026-01-29 16:51:08.362595864 +0000 UTC m=+1320.849799080" Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.365450 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" podStartSLOduration=34.415795889 podStartE2EDuration="37.36523284s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:51:04.281284664 +0000 UTC m=+1316.768487880" lastFinishedPulling="2026-01-29 16:51:07.230721615 +0000 UTC m=+1319.717924831" observedRunningTime="2026-01-29 16:51:08.359619428 +0000 UTC m=+1320.846822644" watchObservedRunningTime="2026-01-29 16:51:08.36523284 +0000 UTC m=+1320.852436056" Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.609190 4813 scope.go:117] "RemoveContainer" containerID="807cb62c35fb102575aa62474a4dd6bb297ab7db132982b6840e21c355f0ad13" Jan 29 16:51:08 crc kubenswrapper[4813]: I0129 16:51:08.630647 4813 scope.go:117] "RemoveContainer" 
containerID="9932fd7bc5aa0212b38b4989166d06e54d6e2ffee528d84858483778964e1333" Jan 29 16:51:10 crc kubenswrapper[4813]: I0129 16:51:10.295764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" event={"ID":"123e5d6b-93c9-452c-9c77-81333f65487d","Type":"ContainerStarted","Data":"6d0b4fc1a35edf9b6856dc4bd09daefc2193e0639ad19b871dcb5a14638fd7bd"} Jan 29 16:51:10 crc kubenswrapper[4813]: I0129 16:51:10.296162 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" Jan 29 16:51:10 crc kubenswrapper[4813]: I0129 16:51:10.298750 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" event={"ID":"6ca5d465-220c-4c51-a5be-c304ddec9e48","Type":"ContainerStarted","Data":"8d5b66c159029725841098d1cc1bd8076ecf717fe610243a8b9a4dcef42712b9"} Jan 29 16:51:10 crc kubenswrapper[4813]: I0129 16:51:10.298994 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" Jan 29 16:51:10 crc kubenswrapper[4813]: I0129 16:51:10.320385 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" podStartSLOduration=2.730409661 podStartE2EDuration="39.320349477s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.134325363 +0000 UTC m=+1285.621528579" lastFinishedPulling="2026-01-29 16:51:09.724265179 +0000 UTC m=+1322.211468395" observedRunningTime="2026-01-29 16:51:10.313747616 +0000 UTC m=+1322.800950842" watchObservedRunningTime="2026-01-29 16:51:10.320349477 +0000 UTC m=+1322.807552693" Jan 29 16:51:10 crc kubenswrapper[4813]: I0129 16:51:10.333608 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" podStartSLOduration=2.674009107 podStartE2EDuration="39.3335863s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.064508841 +0000 UTC m=+1285.551712057" lastFinishedPulling="2026-01-29 16:51:09.724086034 +0000 UTC m=+1322.211289250" observedRunningTime="2026-01-29 16:51:10.330368777 +0000 UTC m=+1322.817571993" watchObservedRunningTime="2026-01-29 16:51:10.3335863 +0000 UTC m=+1322.820789516" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.333351 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-g5vs4" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.394652 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fq8n9" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.421083 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-txhlr" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.605705 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-gttcx" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.628814 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dk624" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.696995 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-gv495" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.746396 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-h6hbr" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.787997 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-5mtzp" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.905440 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rhsqf" Jan 29 16:51:11 crc kubenswrapper[4813]: I0129 16:51:11.970738 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-p9kz6" Jan 29 16:51:12 crc kubenswrapper[4813]: I0129 16:51:12.298146 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-wsgqh" Jan 29 16:51:12 crc kubenswrapper[4813]: I0129 16:51:12.361698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" event={"ID":"53f25c98-8891-4e13-afaa-4cbae4bf1c7e","Type":"ContainerStarted","Data":"649ea1c387bd31806f7228aec534ddc097180b38b9f61ae32eb6fa66eac283dc"} Jan 29 16:51:12 crc kubenswrapper[4813]: I0129 16:51:12.361975 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" Jan 29 16:51:12 crc kubenswrapper[4813]: I0129 16:51:12.394016 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" podStartSLOduration=2.408078014 podStartE2EDuration="41.393992878s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:32.930387586 +0000 UTC m=+1285.417590802" lastFinishedPulling="2026-01-29 16:51:11.91630245 +0000 UTC m=+1324.403505666" observedRunningTime="2026-01-29 16:51:12.381258809 +0000 UTC m=+1324.868462065" watchObservedRunningTime="2026-01-29 16:51:12.393992878 +0000 UTC m=+1324.881196094" Jan 29 16:51:12 crc kubenswrapper[4813]: I0129 16:51:12.963252 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wmz96" Jan 29 16:51:13 crc kubenswrapper[4813]: I0129 16:51:13.370652 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" event={"ID":"b24c6187-bc48-4261-b142-2269f470e58a","Type":"ContainerStarted","Data":"4f14cc7e05e96912b95135203f0c66a7c6bc67287e8c0604eea77424172b30ca"} Jan 29 16:51:13 crc kubenswrapper[4813]: I0129 16:51:13.370846 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" Jan 29 16:51:13 crc kubenswrapper[4813]: I0129 16:51:13.372594 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" 
event={"ID":"d7c7a81c-3f15-493f-b7cd-97486af5c4a8","Type":"ContainerStarted","Data":"9f2ad4e4b10403df0df2eda1bdbe7ee9e6a585e9ebd1f09ec2d1d0606f8bbcab"} Jan 29 16:51:13 crc kubenswrapper[4813]: I0129 16:51:13.372990 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" Jan 29 16:51:13 crc kubenswrapper[4813]: I0129 16:51:13.393454 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" podStartSLOduration=2.977356874 podStartE2EDuration="42.393431629s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:33.11245767 +0000 UTC m=+1285.599660886" lastFinishedPulling="2026-01-29 16:51:12.528532425 +0000 UTC m=+1325.015735641" observedRunningTime="2026-01-29 16:51:13.389216587 +0000 UTC m=+1325.876419813" watchObservedRunningTime="2026-01-29 16:51:13.393431629 +0000 UTC m=+1325.880634845" Jan 29 16:51:13 crc kubenswrapper[4813]: I0129 16:51:13.415219 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" podStartSLOduration=2.71099963 podStartE2EDuration="43.41519538s" podCreationTimestamp="2026-01-29 16:50:30 +0000 UTC" firstStartedPulling="2026-01-29 16:50:32.27430747 +0000 UTC m=+1284.761510706" lastFinishedPulling="2026-01-29 16:51:12.97850324 +0000 UTC m=+1325.465706456" observedRunningTime="2026-01-29 16:51:13.415139778 +0000 UTC m=+1325.902342994" watchObservedRunningTime="2026-01-29 16:51:13.41519538 +0000 UTC m=+1325.902398596" Jan 29 16:51:13 crc kubenswrapper[4813]: I0129 16:51:13.568601 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r" Jan 29 16:51:14 crc kubenswrapper[4813]: I0129 16:51:14.160002 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-k4455" Jan 29 16:51:14 crc kubenswrapper[4813]: I0129 16:51:14.380952 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" event={"ID":"e5d8b964-7415-4684-85a2-2eb6ba75acb6","Type":"ContainerStarted","Data":"6e402877a1a895714f0176bbe004c7f15571d393868918a06b062b62b6c35a79"} Jan 29 16:51:14 crc kubenswrapper[4813]: I0129 16:51:14.381478 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" Jan 29 16:51:14 crc kubenswrapper[4813]: I0129 16:51:14.383992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" event={"ID":"d81220d4-722d-4fc5-9626-826d1eccc841","Type":"ContainerStarted","Data":"74513fbd221e3443b1fd80f6e22616ed96a75879c044fb992e5a1ea04c256913"} Jan 29 16:51:14 crc kubenswrapper[4813]: I0129 16:51:14.384379 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" Jan 29 16:51:14 crc kubenswrapper[4813]: I0129 16:51:14.409501 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" podStartSLOduration=2.256283297 podStartE2EDuration="43.409473313s" podCreationTimestamp="2026-01-29 16:50:31 
+0000 UTC" firstStartedPulling="2026-01-29 16:50:32.566300839 +0000 UTC m=+1285.053504055" lastFinishedPulling="2026-01-29 16:51:13.719490855 +0000 UTC m=+1326.206694071" observedRunningTime="2026-01-29 16:51:14.399941797 +0000 UTC m=+1326.887145023" watchObservedRunningTime="2026-01-29 16:51:14.409473313 +0000 UTC m=+1326.896676539" Jan 29 16:51:14 crc kubenswrapper[4813]: I0129 16:51:14.430802 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" podStartSLOduration=2.74658872 podStartE2EDuration="43.43078054s" podCreationTimestamp="2026-01-29 16:50:31 +0000 UTC" firstStartedPulling="2026-01-29 16:50:32.975100631 +0000 UTC m=+1285.462303847" lastFinishedPulling="2026-01-29 16:51:13.659292451 +0000 UTC m=+1326.146495667" observedRunningTime="2026-01-29 16:51:14.426091534 +0000 UTC m=+1326.913294750" watchObservedRunningTime="2026-01-29 16:51:14.43078054 +0000 UTC m=+1326.917983766" Jan 29 16:51:21 crc kubenswrapper[4813]: I0129 16:51:21.355839 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-99psd" Jan 29 16:51:21 crc kubenswrapper[4813]: I0129 16:51:21.408044 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-qrrsx" Jan 29 16:51:21 crc kubenswrapper[4813]: I0129 16:51:21.667028 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-wmqxl" Jan 29 16:51:21 crc kubenswrapper[4813]: I0129 16:51:21.712911 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-94b2b" Jan 29 16:51:21 crc kubenswrapper[4813]: I0129 16:51:21.753963 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-jssxt" Jan 29 16:51:21 crc kubenswrapper[4813]: I0129 16:51:21.845681 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-mhdmm" Jan 29 16:51:21 crc kubenswrapper[4813]: I0129 16:51:21.849066 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8k6nd" Jan 29 16:51:21 crc kubenswrapper[4813]: I0129 16:51:21.928512 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-zhpjf" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.211559 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vfwqn"] Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.213639 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.217428 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.217689 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.218307 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-c8lbx" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.218320 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.226824 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vfwqn"] Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.233770 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2xx2\" (UniqueName: \"kubernetes.io/projected/be56f14b-0402-44e8-b8b0-e3ee29a65184-kube-api-access-v2xx2\") pod \"dnsmasq-dns-84bb9d8bd9-vfwqn\" (UID: \"be56f14b-0402-44e8-b8b0-e3ee29a65184\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.233868 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be56f14b-0402-44e8-b8b0-e3ee29a65184-config\") pod \"dnsmasq-dns-84bb9d8bd9-vfwqn\" (UID: \"be56f14b-0402-44e8-b8b0-e3ee29a65184\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.306904 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-fhgxq"] Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.308022 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.310281 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.324946 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-fhgxq"] Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.335180 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-config\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.335227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-dns-svc\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.335257 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2xx2\" (UniqueName: \"kubernetes.io/projected/be56f14b-0402-44e8-b8b0-e3ee29a65184-kube-api-access-v2xx2\") pod \"dnsmasq-dns-84bb9d8bd9-vfwqn\" (UID: \"be56f14b-0402-44e8-b8b0-e3ee29a65184\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.335297 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v5tt\" (UniqueName: \"kubernetes.io/projected/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-kube-api-access-8v5tt\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.335326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be56f14b-0402-44e8-b8b0-e3ee29a65184-config\") pod \"dnsmasq-dns-84bb9d8bd9-vfwqn\" (UID: \"be56f14b-0402-44e8-b8b0-e3ee29a65184\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.336227 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be56f14b-0402-44e8-b8b0-e3ee29a65184-config\") pod \"dnsmasq-dns-84bb9d8bd9-vfwqn\" (UID: \"be56f14b-0402-44e8-b8b0-e3ee29a65184\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.370319 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2xx2\" (UniqueName: \"kubernetes.io/projected/be56f14b-0402-44e8-b8b0-e3ee29a65184-kube-api-access-v2xx2\") pod \"dnsmasq-dns-84bb9d8bd9-vfwqn\" (UID: \"be56f14b-0402-44e8-b8b0-e3ee29a65184\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.436931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v5tt\" (UniqueName: \"kubernetes.io/projected/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-kube-api-access-8v5tt\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" Jan 29 16:51:37 
crc kubenswrapper[4813]: I0129 16:51:37.437254 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-config\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq"
Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.437278 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-dns-svc\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq"
Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.438064 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-dns-svc\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq"
Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.438062 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-config\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq"
Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.454072 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v5tt\" (UniqueName: \"kubernetes.io/projected/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-kube-api-access-8v5tt\") pod \"dnsmasq-dns-5f854695bc-fhgxq\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " pod="openstack/dnsmasq-dns-5f854695bc-fhgxq"
Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.535494 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn"
Jan 29 16:51:37 crc kubenswrapper[4813]: I0129 16:51:37.674185 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-fhgxq"
Jan 29 16:51:38 crc kubenswrapper[4813]: I0129 16:51:38.010109 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vfwqn"]
Jan 29 16:51:38 crc kubenswrapper[4813]: I0129 16:51:38.173314 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-fhgxq"]
Jan 29 16:51:38 crc kubenswrapper[4813]: W0129 16:51:38.175827 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70e230f8_1ef0_4bf6_8a9a_42594f3df7bb.slice/crio-0da109f93daa0b31c30eecff0ec65523649c2222fd2a7bd8a34cb4f8cc2a2ddb WatchSource:0}: Error finding container 0da109f93daa0b31c30eecff0ec65523649c2222fd2a7bd8a34cb4f8cc2a2ddb: Status 404 returned error can't find the container with id 0da109f93daa0b31c30eecff0ec65523649c2222fd2a7bd8a34cb4f8cc2a2ddb
Jan 29 16:51:38 crc kubenswrapper[4813]: I0129 16:51:38.540222 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" event={"ID":"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb","Type":"ContainerStarted","Data":"0da109f93daa0b31c30eecff0ec65523649c2222fd2a7bd8a34cb4f8cc2a2ddb"}
Jan 29 16:51:38 crc kubenswrapper[4813]: I0129 16:51:38.541323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" event={"ID":"be56f14b-0402-44e8-b8b0-e3ee29a65184","Type":"ContainerStarted","Data":"97020889fc392fe01d559e67979e5faaaeea60eca372cb0da5e03e60d609f73f"}
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.119879 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-fhgxq"]
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.167237 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-655nf"]
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.168493 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.189910 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-655nf"]
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.206821 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.207358 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-config\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.208047 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w2v2\" (UniqueName: \"kubernetes.io/projected/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-kube-api-access-5w2v2\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.309939 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-config\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.310009 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w2v2\" (UniqueName: \"kubernetes.io/projected/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-kube-api-access-5w2v2\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.310041 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.311181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.311390 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-config\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.335487 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w2v2\" (UniqueName: \"kubernetes.io/projected/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-kube-api-access-5w2v2\") pod \"dnsmasq-dns-744ffd65bc-655nf\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.497400 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-655nf"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.530036 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vfwqn"]
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.559985 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnjlg"]
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.561196 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.580462 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnjlg"]
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.619882 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-dns-svc\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.620404 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-config\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.620476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjk4s\" (UniqueName: \"kubernetes.io/projected/72953957-83b4-4b8b-abd5-a8902227edc7-kube-api-access-bjk4s\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.724913 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-config\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.724977 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjk4s\" (UniqueName: \"kubernetes.io/projected/72953957-83b4-4b8b-abd5-a8902227edc7-kube-api-access-bjk4s\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg"
Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.725048 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-dns-svc\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg"
volume \"config\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-config\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.726161 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-dns-svc\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.764875 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjk4s\" (UniqueName: \"kubernetes.io/projected/72953957-83b4-4b8b-abd5-a8902227edc7-kube-api-access-bjk4s\") pod \"dnsmasq-dns-95f5f6995-pnjlg\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.937129 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" Jan 29 16:51:40 crc kubenswrapper[4813]: I0129 16:51:40.946459 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-655nf"] Jan 29 16:51:41 crc kubenswrapper[4813]: W0129 16:51:41.021300 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod077036c4_8186_4a4e_b4ec_f9ba9b5532fe.slice/crio-8f9fefabc57b3ea0e9f7d609e63dd8be02eb8e28fade2d59f8595521a3ed2caf WatchSource:0}: Error finding container 8f9fefabc57b3ea0e9f7d609e63dd8be02eb8e28fade2d59f8595521a3ed2caf: Status 404 returned error can't find the container with id 8f9fefabc57b3ea0e9f7d609e63dd8be02eb8e28fade2d59f8595521a3ed2caf Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.357981 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.359559 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.364490 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.364713 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.365519 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.365732 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.365892 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.366130 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6gjkx" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.370669 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.378973 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.446826 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6463fe6f-cd6d-4078-8fa2-0d167de480df-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.446885 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.446911 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.447024 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.447068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6463fe6f-cd6d-4078-8fa2-0d167de480df-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.447094 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.447146 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.447166 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77wdq\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-kube-api-access-77wdq\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.447217 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.447271 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.447296 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.493755 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnjlg"] Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548487 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6463fe6f-cd6d-4078-8fa2-0d167de480df-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548547 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548602 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548637 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6463fe6f-cd6d-4078-8fa2-0d167de480df-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548665 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548691 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548714 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77wdq\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-kube-api-access-77wdq\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548759 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.548858 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.552006 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.552103 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.552267 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.552512 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.552940 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.554402 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.570648 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.573783 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.576780 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6463fe6f-cd6d-4078-8fa2-0d167de480df-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.577458 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77wdq\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-kube-api-access-77wdq\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.582922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.583183 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6463fe6f-cd6d-4078-8fa2-0d167de480df-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " pod="openstack/rabbitmq-server-0" Jan 29 16:51:41 crc kubenswrapper[4813]: 
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.584376 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-655nf" event={"ID":"077036c4-8186-4a4e-b4ec-f9ba9b5532fe","Type":"ContainerStarted","Data":"8f9fefabc57b3ea0e9f7d609e63dd8be02eb8e28fade2d59f8595521a3ed2caf"}
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.600922 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" event={"ID":"72953957-83b4-4b8b-abd5-a8902227edc7","Type":"ContainerStarted","Data":"9b3828602681ca794358c4becccbc7e3e9905cc83111d8e603bffec67454c165"}
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.688705 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.695376 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.698190 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.703579 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nz6r4"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.703839 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.703840 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.704376 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.705644 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.705704 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.705930 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.721081 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758123 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758173 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bda951f8-8354-4ca3-be9e-f92f6fea40cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758198 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758218 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758251 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758291 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758357 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758516 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758549 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bda951f8-8354-4ca3-be9e-f92f6fea40cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.758567 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4h6n\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-kube-api-access-s4h6n\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862448 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862737 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bda951f8-8354-4ca3-be9e-f92f6fea40cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862759 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862782 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862808 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862882 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862906 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862944 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4h6n\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-kube-api-access-s4h6n\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862965 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.862984 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bda951f8-8354-4ca3-be9e-f92f6fea40cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.864557 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.864555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.865349 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.866130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.866968 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.868922 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.869130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bda951f8-8354-4ca3-be9e-f92f6fea40cc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.869584 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bda951f8-8354-4ca3-be9e-f92f6fea40cc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.874692 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.878824 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.893311 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4h6n\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-kube-api-access-s4h6n\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:41 crc kubenswrapper[4813]: I0129 16:51:41.924124 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.033883 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.343656 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.617416 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6463fe6f-cd6d-4078-8fa2-0d167de480df","Type":"ContainerStarted","Data":"a400750b7cc69e7d79af657f702de65ef2a4874acf812f06fbd3aac6f351109f"}
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.666581 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.902426 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
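[editor's note] The "SyncLoop (PLEG)" records above are the most convenient hook when reconstructing startup order from a log like this: each one carries the pod, its UID and the new container ID. A small sketch that filters them out of the decompressed log; the regular expression is written against the exact record format above, and the filename in the usage line is just this archive's kubelet.log.gz:

    // Sketch only: extract (pod, UID, container ID) from PLEG
    // "ContainerStarted" records on stdin.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Matches records like:
    //   kubelet.go:2453] "SyncLoop (PLEG): event for pod"
    //   pod="openstack/rabbitmq-server-0"
    //   event={"ID":"...","Type":"ContainerStarted","Data":"a400750b..."}
    var pleg = regexp.MustCompile(
    	`SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"ContainerStarted","Data":"([0-9a-f]+)"\}`)

    func main() {
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be long
    	for sc.Scan() {
    		if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
    			fmt.Printf("%s\tuid=%s\tcontainer=%s\n", m[1], m[2], m[3])
    		}
    	}
    }

Usage would be along the lines of: zcat kubelet.log.gz | go run pleggrep.go
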
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.905590 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.911624 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.912975 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zc6j8"
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.913274 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.917641 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.919752 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 29 16:51:42 crc kubenswrapper[4813]: I0129 16:51:42.920969 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.013926 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.013968 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.013989 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.014023 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.014045 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.014080 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.014105 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmsp\" (UniqueName: \"kubernetes.io/projected/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kube-api-access-pxmsp\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.014146 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.123070 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.123148 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.123180 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.123221 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.123248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.123295 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.123347 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmsp\" (UniqueName: \"kubernetes.io/projected/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kube-api-access-pxmsp\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.123376 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.125004 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.125782 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.126024 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.126555 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.126805 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.138769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.138863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.152845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmsp\" (UniqueName: \"kubernetes.io/projected/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kube-api-access-pxmsp\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.185461 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.242700 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.626399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bda951f8-8354-4ca3-be9e-f92f6fea40cc","Type":"ContainerStarted","Data":"e8a7e41dd3cdfd13086c1f136cc2cb4d23fc760fea21525032e09185bbb6dac1"}
Jan 29 16:51:43 crc kubenswrapper[4813]: I0129 16:51:43.805285 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 29 16:51:43 crc kubenswrapper[4813]: W0129 16:51:43.861361 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09a277e9_f3d7_4499_b29b_ef8788c5e1b0.slice/crio-09ee9b7de4a162868480c4564f5104a66713a75ff63d7cc741bfe0a4f3185625 WatchSource:0}: Error finding container 09ee9b7de4a162868480c4564f5104a66713a75ff63d7cc741bfe0a4f3185625: Status 404 returned error can't find the container with id 09ee9b7de4a162868480c4564f5104a66713a75ff63d7cc741bfe0a4f3185625
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.356230 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.359994 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.366971 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.410802 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xgq58"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.412204 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.412369 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.412447 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.455474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.455551 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.455579 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
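[editor's note] The two "Failed to process watch event ... Status 404" warnings above are cadvisor racing the runtime: the cgroup for a new container appears before CRI-O has registered the container, so the lookup by ID briefly fails and the event is dropped. The cgroup path in each warning encodes the pod UID with dashes mapped to underscores, which is handy when matching warnings back to pods; a sketch of that mapping, with the naming rule read directly off the warning text (nothing else assumed):

    // Sketch only: rebuild the cgroup path seen in the watch-event
    // warnings from a pod UID and container ID. Dashes in the UID
    // become underscores in the systemd slice name.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func besteffortCgroupPath(podUID, containerID string) string {
    	slice := "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    	return fmt.Sprintf("/kubepods.slice/kubepods-besteffort.slice/%s/crio-%s", slice, containerID)
    }

    func main() {
    	// UID and container ID taken from the openstack-galera-0 warning above;
    	// the output reproduces the Name field of that warning.
    	fmt.Println(besteffortCgroupPath(
    		"09a277e9-f3d7-4499-b29b-ef8788c5e1b0",
    		"09ee9b7de4a162868480c4564f5104a66713a75ff63d7cc741bfe0a4f3185625"))
    }
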
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.455600 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbphm\" (UniqueName: \"kubernetes.io/projected/392bc7cc-af71-4ee6-b844-e5adeeabba64-kube-api-access-xbphm\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.455633 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.455650 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.455668 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.455697 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.532848 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.538550 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.546305 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.546395 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.546517 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-r6pmx"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.552905 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.557630 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.557683 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.557713 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbphm\" (UniqueName: \"kubernetes.io/projected/392bc7cc-af71-4ee6-b844-e5adeeabba64-kube-api-access-xbphm\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.557746 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.557765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.557780 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.557811 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.557847 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.560296 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.560541 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.560763 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.560803 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.568333 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.568571 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.577655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.596034 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbphm\" (UniqueName: \"kubernetes.io/projected/392bc7cc-af71-4ee6-b844-e5adeeabba64-kube-api-access-xbphm\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.633456 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " pod="openstack/openstack-cell1-galera-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.642177 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09a277e9-f3d7-4499-b29b-ef8788c5e1b0","Type":"ContainerStarted","Data":"09ee9b7de4a162868480c4564f5104a66713a75ff63d7cc741bfe0a4f3185625"}
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.659526 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.659610 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-config-data\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.659652 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnglg\" (UniqueName: \"kubernetes.io/projected/6067c320-64f2-4c71-b4b0-bd136749200f-kube-api-access-pnglg\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.659724 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.659753 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-kolla-config\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0"
Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.755713 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.760693 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.760740 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-kolla-config\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.760784 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.760828 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-config-data\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.760862 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnglg\" (UniqueName: \"kubernetes.io/projected/6067c320-64f2-4c71-b4b0-bd136749200f-kube-api-access-pnglg\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.761863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-kolla-config\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.762443 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-config-data\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.776869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.804963 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnglg\" (UniqueName: \"kubernetes.io/projected/6067c320-64f2-4c71-b4b0-bd136749200f-kube-api-access-pnglg\") pod \"memcached-0\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.807789 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"6067c320-64f2-4c71-b4b0-bd136749200f\") " pod="openstack/memcached-0" Jan 29 16:51:44 crc kubenswrapper[4813]: I0129 16:51:44.866839 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 16:51:45 crc kubenswrapper[4813]: I0129 16:51:45.278216 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 16:51:45 crc kubenswrapper[4813]: W0129 16:51:45.326690 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392bc7cc_af71_4ee6_b844_e5adeeabba64.slice/crio-0c72a2c16ebdf06dc7c4bb3f3e650b173d9f4cf01570f2f9853480c7034d97c9 WatchSource:0}: Error finding container 0c72a2c16ebdf06dc7c4bb3f3e650b173d9f4cf01570f2f9853480c7034d97c9: Status 404 returned error can't find the container with id 0c72a2c16ebdf06dc7c4bb3f3e650b173d9f4cf01570f2f9853480c7034d97c9 Jan 29 16:51:45 crc kubenswrapper[4813]: I0129 16:51:45.421627 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 16:51:45 crc kubenswrapper[4813]: W0129 16:51:45.494161 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6067c320_64f2_4c71_b4b0_bd136749200f.slice/crio-f5e4e971210574647e0a6a98de87bf483a83477fafefdf2858d6b58021ac8c73 WatchSource:0}: Error finding container f5e4e971210574647e0a6a98de87bf483a83477fafefdf2858d6b58021ac8c73: Status 404 returned error can't find the container with id f5e4e971210574647e0a6a98de87bf483a83477fafefdf2858d6b58021ac8c73 Jan 29 16:51:45 crc kubenswrapper[4813]: I0129 16:51:45.650919 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6067c320-64f2-4c71-b4b0-bd136749200f","Type":"ContainerStarted","Data":"f5e4e971210574647e0a6a98de87bf483a83477fafefdf2858d6b58021ac8c73"} Jan 29 16:51:45 crc kubenswrapper[4813]: I0129 16:51:45.652372 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"392bc7cc-af71-4ee6-b844-e5adeeabba64","Type":"ContainerStarted","Data":"0c72a2c16ebdf06dc7c4bb3f3e650b173d9f4cf01570f2f9853480c7034d97c9"} Jan 29 16:51:46 crc kubenswrapper[4813]: I0129 16:51:46.464883 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:51:46 crc kubenswrapper[4813]: I0129 16:51:46.466468 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:51:46 crc kubenswrapper[4813]: I0129 16:51:46.472552 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-2jg4d" Jan 29 16:51:46 crc kubenswrapper[4813]: I0129 16:51:46.475695 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:51:46 crc kubenswrapper[4813]: I0129 16:51:46.591086 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fgz\" (UniqueName: \"kubernetes.io/projected/a8b1a98d-6274-4af5-b861-b0c9d9dc0d30-kube-api-access-q2fgz\") pod \"kube-state-metrics-0\" (UID: \"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30\") " pod="openstack/kube-state-metrics-0" Jan 29 16:51:46 crc kubenswrapper[4813]: I0129 16:51:46.692977 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fgz\" (UniqueName: \"kubernetes.io/projected/a8b1a98d-6274-4af5-b861-b0c9d9dc0d30-kube-api-access-q2fgz\") pod \"kube-state-metrics-0\" (UID: \"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30\") " pod="openstack/kube-state-metrics-0" Jan 29 16:51:46 crc kubenswrapper[4813]: I0129 16:51:46.718365 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fgz\" (UniqueName: \"kubernetes.io/projected/a8b1a98d-6274-4af5-b861-b0c9d9dc0d30-kube-api-access-q2fgz\") pod \"kube-state-metrics-0\" (UID: \"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30\") " pod="openstack/kube-state-metrics-0" Jan 29 16:51:46 crc kubenswrapper[4813]: I0129 16:51:46.794803 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:51:47 crc kubenswrapper[4813]: I0129 16:51:47.551844 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:51:47 crc kubenswrapper[4813]: I0129 16:51:47.685410 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30","Type":"ContainerStarted","Data":"39919ff14a16305b0948dc639a1edb5e8c15163b3a61dd9155f0671e69311fd5"} Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.913979 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cmsdz"] Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.915436 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.928887 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hxqn8" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.929087 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.929211 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.948020 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cmsdz"] Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.951009 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xqdpz"] Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.952545 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.956803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-log-ovn\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.956834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2znxt\" (UniqueName: \"kubernetes.io/projected/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-kube-api-access-2znxt\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.956865 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-combined-ca-bundle\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.956884 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.956911 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run-ovn\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.956938 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-ovn-controller-tls-certs\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.956957 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-scripts\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:49 crc kubenswrapper[4813]: I0129 16:51:49.957097 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xqdpz"] Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.063781 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run-ovn\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064079 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l92bf\" (UniqueName: 
\"kubernetes.io/projected/6aa0a9e1-c775-4e0b-8286-ef272885c653-kube-api-access-l92bf\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064128 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-ovn-controller-tls-certs\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064155 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-scripts\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-log\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-run\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064232 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aa0a9e1-c775-4e0b-8286-ef272885c653-scripts\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064254 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-etc-ovs\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064293 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-log-ovn\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064308 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2znxt\" (UniqueName: \"kubernetes.io/projected/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-kube-api-access-2znxt\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064328 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-combined-ca-bundle\") pod \"ovn-controller-cmsdz\" 
(UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064343 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064362 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-lib\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.064783 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run-ovn\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.069689 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-log-ovn\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.073669 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-scripts\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.074702 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.074967 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-ovn-controller-tls-certs\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.080728 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-combined-ca-bundle\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.115337 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2znxt\" (UniqueName: \"kubernetes.io/projected/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-kube-api-access-2znxt\") pod \"ovn-controller-cmsdz\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.166563 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6aa0a9e1-c775-4e0b-8286-ef272885c653-scripts\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.166632 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-etc-ovs\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.166702 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-lib\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.166734 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l92bf\" (UniqueName: \"kubernetes.io/projected/6aa0a9e1-c775-4e0b-8286-ef272885c653-kube-api-access-l92bf\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.166773 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-log\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.166791 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-run\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.166902 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-run\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.168348 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-lib\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.168492 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-etc-ovs\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.168812 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-log\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 
16:51:50.168881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aa0a9e1-c775-4e0b-8286-ef272885c653-scripts\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.226761 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l92bf\" (UniqueName: \"kubernetes.io/projected/6aa0a9e1-c775-4e0b-8286-ef272885c653-kube-api-access-l92bf\") pod \"ovn-controller-ovs-xqdpz\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.249091 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cmsdz" Jan 29 16:51:50 crc kubenswrapper[4813]: I0129 16:51:50.274539 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.437422 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.439672 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.445577 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.445708 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.446433 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.446597 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6llxw" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.447810 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.457126 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.494654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57a6aaa4-f80a-49fa-8236-967825494243-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.494702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.494723 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " 
pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.494765 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.494792 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.494816 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-config\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.494842 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.494875 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvdj\" (UniqueName: \"kubernetes.io/projected/57a6aaa4-f80a-49fa-8236-967825494243-kube-api-access-dsvdj\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.596412 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsvdj\" (UniqueName: \"kubernetes.io/projected/57a6aaa4-f80a-49fa-8236-967825494243-kube-api-access-dsvdj\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.596482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57a6aaa4-f80a-49fa-8236-967825494243-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.596514 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.596534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.596592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.596628 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.596659 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-config\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.596696 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.597163 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.597471 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57a6aaa4-f80a-49fa-8236-967825494243-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.598171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-config\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.598302 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.603614 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.605740 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.608333 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.619561 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.619889 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvdj\" (UniqueName: \"kubernetes.io/projected/57a6aaa4-f80a-49fa-8236-967825494243-kube-api-access-dsvdj\") pod \"ovsdbserver-nb-0\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") " pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:51 crc kubenswrapper[4813]: I0129 16:51:51.762954 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.395533 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.397616 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.399653 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.399790 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.400026 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-t4dv6" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.400173 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.414957 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.439264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-config\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.439339 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.439428 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.439540 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs7pb\" (UniqueName: \"kubernetes.io/projected/b3b9c405-fbca-44fa-820a-1613a7df4c9c-kube-api-access-qs7pb\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.439591 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.439635 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.439879 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.439909 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.541702 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-config\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.541785 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.541821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.541864 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs7pb\" (UniqueName: \"kubernetes.io/projected/b3b9c405-fbca-44fa-820a-1613a7df4c9c-kube-api-access-qs7pb\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.541881 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.541914 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.541933 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.541948 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.542089 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.542718 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-config\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.543062 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.543092 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.550179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.551688 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.552362 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.558337 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs7pb\" (UniqueName: \"kubernetes.io/projected/b3b9c405-fbca-44fa-820a-1613a7df4c9c-kube-api-access-qs7pb\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.565322 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " pod="openstack/ovsdbserver-sb-0" Jan 29 16:51:54 crc kubenswrapper[4813]: I0129 16:51:54.719234 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:52:11 crc kubenswrapper[4813]: E0129 16:52:11.012777 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 29 16:52:11 crc kubenswrapper[4813]: E0129 16:52:11.013599 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5f9h94h5fbh647h5dh648h56bh546h687h66hcdh594h89h589h9h56dh6fh96hdch56h568h5c4h556h598h57h9fhdbh89h595h5f5h76h5c5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnglg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(6067c320-64f2-4c71-b4b0-bd136749200f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:52:11 crc kubenswrapper[4813]: E0129 16:52:11.016134 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="6067c320-64f2-4c71-b4b0-bd136749200f" Jan 29 16:52:11 crc kubenswrapper[4813]: E0129 16:52:11.248836 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 29 16:52:11 crc kubenswrapper[4813]: E0129 16:52:11.249028 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxmsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(09a277e9-f3d7-4499-b29b-ef8788c5e1b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:52:11 crc kubenswrapper[4813]: E0129 16:52:11.251397 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" Jan 29 16:52:11 crc kubenswrapper[4813]: E0129 16:52:11.900382 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="6067c320-64f2-4c71-b4b0-bd136749200f" Jan 29 16:52:11 crc kubenswrapper[4813]: E0129 16:52:11.900422 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" Jan 29 16:52:12 crc kubenswrapper[4813]: E0129 16:52:12.096058 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 29 16:52:12 crc kubenswrapper[4813]: E0129 16:52:12.096231 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4h6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(bda951f8-8354-4ca3-be9e-f92f6fea40cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:52:12 crc kubenswrapper[4813]: E0129 16:52:12.097857 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" Jan 29 16:52:12 crc kubenswrapper[4813]: E0129 16:52:12.213988 4813 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 29 16:52:12 crc kubenswrapper[4813]: E0129 16:52:12.214223 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-77wdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(6463fe6f-cd6d-4078-8fa2-0d167de480df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:52:12 crc kubenswrapper[4813]: E0129 16:52:12.215590 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" 
podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" Jan 29 16:52:12 crc kubenswrapper[4813]: E0129 16:52:12.904805 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" Jan 29 16:52:12 crc kubenswrapper[4813]: E0129 16:52:12.905332 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" Jan 29 16:52:17 crc kubenswrapper[4813]: I0129 16:52:17.682904 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cmsdz"] Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.130166 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.130314 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8v5tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-fhgxq_openstack(70e230f8-1ef0-4bf6-8a9a-42594f3df7bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.132220 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" podUID="70e230f8-1ef0-4bf6-8a9a-42594f3df7bb" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.137884 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.138041 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
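In these dnsmasq init-container dumps, POD_IP has an empty Value and a ValueFrom fieldRef on status.podIP: the kubelet resolves the pod's own IP through the downward API and then expands $(POD_IP) inside --listen-address. Note that the argument list ends with --test (visible where each dump continues), so the container only validates the dnsmasq configuration and exits. A sketch of that env wiring (the interrupted dump resumes below):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// No literal Value: the kubelet fills POD_IP in from the pod's own
	// status, exactly as dumped above:
	// ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},...}
	podIP := corev1.EnvVar{
		Name: "POD_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				APIVersion: "v1",
				FieldPath:  "status.podIP",
			},
		},
	}
	// dnsmasq is then started with --listen-address=$(POD_IP), which the
	// kubelet substitutes before exec.
	fmt.Printf("%+v\n", podIP)
}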
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjk4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-pnjlg_openstack(72953957-83b4-4b8b-abd5-a8902227edc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.139258 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.206377 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.206367 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.206527 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w2v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-655nf_openstack(077036c4-8186-4a4e-b4ec-f9ba9b5532fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.206580 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
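Fields printed with a leading asterisk in these SecurityContext dumps (RunAsUser:*1000650000, RunAsNonRoot:*true, AllowPrivilegeEscalation:*false) are pointer-valued in the Go API: nil means "unset, inherit the default", while a non-nil pointer pins an explicit value. The UID 1000650000 is presumably from the range OpenShift's restricted SCC assigned to the openstack namespace. A sketch of the same context, assuming the k8s.io/utils/ptr helper package (the interrupted dump resumes below):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/utils/ptr"
)

func main() {
	sc := &corev1.SecurityContext{
		// Drop:[ALL] in the dump — every Linux capability removed.
		Capabilities: &corev1.Capabilities{
			Drop: []corev1.Capability{"ALL"},
		},
		// Pointer fields: these are what print with a '*' prefix above.
		RunAsUser:                ptr.To[int64](1000650000),
		RunAsNonRoot:             ptr.To(true),
		AllowPrivilegeEscalation: ptr.To(false),
		SeccompProfile: &corev1.SeccompProfile{
			Type: corev1.SeccompProfileTypeRuntimeDefault,
		},
	}
	fmt.Printf("%+v\n", sc)
}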
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2xx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-vfwqn_openstack(be56f14b-0402-44e8-b8b0-e3ee29a65184): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.207738 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" podUID="be56f14b-0402-44e8-b8b0-e3ee29a65184" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.207738 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-655nf" podUID="077036c4-8186-4a4e-b4ec-f9ba9b5532fe" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.218857 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.218909 4813 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.219039 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q2fgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(a8b1a98d-6274-4af5-b861-b0c9d9dc0d30): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.220168 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" Jan 29 16:52:18 crc kubenswrapper[4813]: I0129 16:52:18.664219 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xqdpz"] Jan 29 16:52:18 crc kubenswrapper[4813]: I0129 16:52:18.950272 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"392bc7cc-af71-4ee6-b844-e5adeeabba64","Type":"ContainerStarted","Data":"575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244"} Jan 29 16:52:18 crc kubenswrapper[4813]: I0129 16:52:18.952367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz" event={"ID":"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b","Type":"ContainerStarted","Data":"e65b4b7935d5bd7e34c9faff8ad9f97580ad68a97b86ef16e9ab5b1b1fbffd05"} Jan 29 16:52:18 crc kubenswrapper[4813]: I0129 16:52:18.953513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xqdpz" event={"ID":"6aa0a9e1-c775-4e0b-8286-ef272885c653","Type":"ContainerStarted","Data":"a6333d3ed43e3aa222e6d7bd7f69cc9ca548e928b7bce9769a46f86a8b0701a5"} Jan 29 16:52:18 crc 
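Unlike memcached's TCP probes, kube-state-metrics is probed over HTTP: /livez on the http-metrics port (8080) and /readyz on the telemetry port (8081). Its ImagePullPolicy is Always, so every container start forces a registry pull even when the image is cached, which is why this pod keeps re-hitting the canceled pull. A sketch of the two probes as dumped (the record interrupted above picks up again below):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/livez",
				Port:   intstr.FromInt(8080), // the http-metrics container port
				Scheme: corev1.URISchemeHTTP,
			},
		},
		InitialDelaySeconds: 5,
		TimeoutSeconds:      5,
		PeriodSeconds:       10,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/readyz",
				Port:   intstr.FromInt(8081), // the telemetry container port
				Scheme: corev1.URISchemeHTTP,
			},
		},
		InitialDelaySeconds: 5,
		TimeoutSeconds:      5,
		PeriodSeconds:       10,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}
	fmt.Printf("%+v\n%+v\n", liveness, readiness)
}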
kubenswrapper[4813]: E0129 16:52:18.955365 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.955670 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" Jan 29 16:52:18 crc kubenswrapper[4813]: E0129 16:52:18.955872 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-655nf" podUID="077036c4-8186-4a4e-b4ec-f9ba9b5532fe" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.264096 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:52:19 crc kubenswrapper[4813]: W0129 16:52:19.271837 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a6aaa4_f80a_49fa_8236_967825494243.slice/crio-3d9c318e55fd04269a6f5c216ebd86978da0c51e68cde93e0d1bd6ddc6ef9e66 WatchSource:0}: Error finding container 3d9c318e55fd04269a6f5c216ebd86978da0c51e68cde93e0d1bd6ddc6ef9e66: Status 404 returned error can't find the container with id 3d9c318e55fd04269a6f5c216ebd86978da0c51e68cde93e0d1bd6ddc6ef9e66 Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.385316 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.392433 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.502575 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be56f14b-0402-44e8-b8b0-e3ee29a65184-config" (OuterVolumeSpecName: "config") pod "be56f14b-0402-44e8-b8b0-e3ee29a65184" (UID: "be56f14b-0402-44e8-b8b0-e3ee29a65184"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.501945 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be56f14b-0402-44e8-b8b0-e3ee29a65184-config\") pod \"be56f14b-0402-44e8-b8b0-e3ee29a65184\" (UID: \"be56f14b-0402-44e8-b8b0-e3ee29a65184\") " Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.502678 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-dns-svc\") pod \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.502721 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2xx2\" (UniqueName: \"kubernetes.io/projected/be56f14b-0402-44e8-b8b0-e3ee29a65184-kube-api-access-v2xx2\") pod \"be56f14b-0402-44e8-b8b0-e3ee29a65184\" (UID: \"be56f14b-0402-44e8-b8b0-e3ee29a65184\") " Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.502794 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-config\") pod \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.502902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v5tt\" (UniqueName: \"kubernetes.io/projected/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-kube-api-access-8v5tt\") pod \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\" (UID: \"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb\") " Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.503406 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be56f14b-0402-44e8-b8b0-e3ee29a65184-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.504213 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70e230f8-1ef0-4bf6-8a9a-42594f3df7bb" (UID: "70e230f8-1ef0-4bf6-8a9a-42594f3df7bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.504583 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-config" (OuterVolumeSpecName: "config") pod "70e230f8-1ef0-4bf6-8a9a-42594f3df7bb" (UID: "70e230f8-1ef0-4bf6-8a9a-42594f3df7bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.511215 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-kube-api-access-8v5tt" (OuterVolumeSpecName: "kube-api-access-8v5tt") pod "70e230f8-1ef0-4bf6-8a9a-42594f3df7bb" (UID: "70e230f8-1ef0-4bf6-8a9a-42594f3df7bb"). InnerVolumeSpecName "kube-api-access-8v5tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.512026 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be56f14b-0402-44e8-b8b0-e3ee29a65184-kube-api-access-v2xx2" (OuterVolumeSpecName: "kube-api-access-v2xx2") pod "be56f14b-0402-44e8-b8b0-e3ee29a65184" (UID: "be56f14b-0402-44e8-b8b0-e3ee29a65184"). InnerVolumeSpecName "kube-api-access-v2xx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.604516 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.604541 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2xx2\" (UniqueName: \"kubernetes.io/projected/be56f14b-0402-44e8-b8b0-e3ee29a65184-kube-api-access-v2xx2\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.604554 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.604564 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v5tt\" (UniqueName: \"kubernetes.io/projected/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb-kube-api-access-8v5tt\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.759967 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:52:19 crc kubenswrapper[4813]: W0129 16:52:19.767760 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3b9c405_fbca_44fa_820a_1613a7df4c9c.slice/crio-fb2c14e56f8f52789c8f375257bf74152d33604924d36732bdc99cdd5d053f02 WatchSource:0}: Error finding container fb2c14e56f8f52789c8f375257bf74152d33604924d36732bdc99cdd5d053f02: Status 404 returned error can't find the container with id fb2c14e56f8f52789c8f375257bf74152d33604924d36732bdc99cdd5d053f02 Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.962913 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" event={"ID":"70e230f8-1ef0-4bf6-8a9a-42594f3df7bb","Type":"ContainerDied","Data":"0da109f93daa0b31c30eecff0ec65523649c2222fd2a7bd8a34cb4f8cc2a2ddb"} Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.962930 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-fhgxq" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.967435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"57a6aaa4-f80a-49fa-8236-967825494243","Type":"ContainerStarted","Data":"3d9c318e55fd04269a6f5c216ebd86978da0c51e68cde93e0d1bd6ddc6ef9e66"} Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.968634 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" event={"ID":"be56f14b-0402-44e8-b8b0-e3ee29a65184","Type":"ContainerDied","Data":"97020889fc392fe01d559e67979e5faaaeea60eca372cb0da5e03e60d609f73f"} Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.968655 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-vfwqn" Jan 29 16:52:19 crc kubenswrapper[4813]: I0129 16:52:19.969763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b3b9c405-fbca-44fa-820a-1613a7df4c9c","Type":"ContainerStarted","Data":"fb2c14e56f8f52789c8f375257bf74152d33604924d36732bdc99cdd5d053f02"} Jan 29 16:52:20 crc kubenswrapper[4813]: I0129 16:52:20.030438 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-fhgxq"] Jan 29 16:52:20 crc kubenswrapper[4813]: I0129 16:52:20.044344 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-fhgxq"] Jan 29 16:52:20 crc kubenswrapper[4813]: I0129 16:52:20.059018 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vfwqn"] Jan 29 16:52:20 crc kubenswrapper[4813]: I0129 16:52:20.080028 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-vfwqn"] Jan 29 16:52:20 crc kubenswrapper[4813]: I0129 16:52:20.256649 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e230f8-1ef0-4bf6-8a9a-42594f3df7bb" path="/var/lib/kubelet/pods/70e230f8-1ef0-4bf6-8a9a-42594f3df7bb/volumes" Jan 29 16:52:20 crc kubenswrapper[4813]: I0129 16:52:20.257280 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be56f14b-0402-44e8-b8b0-e3ee29a65184" path="/var/lib/kubelet/pods/be56f14b-0402-44e8-b8b0-e3ee29a65184/volumes" Jan 29 16:52:22 crc kubenswrapper[4813]: I0129 16:52:22.992437 4813 generic.go:334] "Generic (PLEG): container finished" podID="392bc7cc-af71-4ee6-b844-e5adeeabba64" containerID="575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244" exitCode=0 Jan 29 16:52:22 crc kubenswrapper[4813]: I0129 16:52:22.992574 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"392bc7cc-af71-4ee6-b844-e5adeeabba64","Type":"ContainerDied","Data":"575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244"} Jan 29 16:52:25 crc kubenswrapper[4813]: I0129 16:52:25.015836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xqdpz" event={"ID":"6aa0a9e1-c775-4e0b-8286-ef272885c653","Type":"ContainerStarted","Data":"fd5b50e1141f53e3f4146715f3fa19672e6c3c63fb01fb8ae6321eeeb23d77a9"} Jan 29 16:52:25 crc kubenswrapper[4813]: I0129 16:52:25.018802 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09a277e9-f3d7-4499-b29b-ef8788c5e1b0","Type":"ContainerStarted","Data":"a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0"} Jan 29 16:52:25 crc kubenswrapper[4813]: I0129 16:52:25.061908 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"57a6aaa4-f80a-49fa-8236-967825494243","Type":"ContainerStarted","Data":"327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4"} Jan 29 16:52:25 crc kubenswrapper[4813]: I0129 16:52:25.074224 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"392bc7cc-af71-4ee6-b844-e5adeeabba64","Type":"ContainerStarted","Data":"d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9"} Jan 29 16:52:25 crc kubenswrapper[4813]: I0129 16:52:25.076891 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz" 
event={"ID":"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b","Type":"ContainerStarted","Data":"898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806"} Jan 29 16:52:25 crc kubenswrapper[4813]: I0129 16:52:25.077054 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cmsdz" Jan 29 16:52:25 crc kubenswrapper[4813]: I0129 16:52:25.127853 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cmsdz" podStartSLOduration=29.809077179 podStartE2EDuration="36.127830719s" podCreationTimestamp="2026-01-29 16:51:49 +0000 UTC" firstStartedPulling="2026-01-29 16:52:18.106952766 +0000 UTC m=+1390.594155982" lastFinishedPulling="2026-01-29 16:52:24.425706296 +0000 UTC m=+1396.912909522" observedRunningTime="2026-01-29 16:52:25.120452791 +0000 UTC m=+1397.607656007" watchObservedRunningTime="2026-01-29 16:52:25.127830719 +0000 UTC m=+1397.615033945" Jan 29 16:52:25 crc kubenswrapper[4813]: I0129 16:52:25.144458 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.24262448 podStartE2EDuration="42.144438458s" podCreationTimestamp="2026-01-29 16:51:43 +0000 UTC" firstStartedPulling="2026-01-29 16:51:45.338748169 +0000 UTC m=+1357.825951385" lastFinishedPulling="2026-01-29 16:52:17.240562147 +0000 UTC m=+1389.727765363" observedRunningTime="2026-01-29 16:52:25.138545522 +0000 UTC m=+1397.625748738" watchObservedRunningTime="2026-01-29 16:52:25.144438458 +0000 UTC m=+1397.631641674" Jan 29 16:52:26 crc kubenswrapper[4813]: I0129 16:52:26.083656 4813 generic.go:334] "Generic (PLEG): container finished" podID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerID="fd5b50e1141f53e3f4146715f3fa19672e6c3c63fb01fb8ae6321eeeb23d77a9" exitCode=0 Jan 29 16:52:26 crc kubenswrapper[4813]: I0129 16:52:26.083778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xqdpz" event={"ID":"6aa0a9e1-c775-4e0b-8286-ef272885c653","Type":"ContainerDied","Data":"fd5b50e1141f53e3f4146715f3fa19672e6c3c63fb01fb8ae6321eeeb23d77a9"} Jan 29 16:52:30 crc kubenswrapper[4813]: I0129 16:52:30.110647 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6067c320-64f2-4c71-b4b0-bd136749200f","Type":"ContainerStarted","Data":"9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c"} Jan 29 16:52:30 crc kubenswrapper[4813]: I0129 16:52:30.111739 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 16:52:30 crc kubenswrapper[4813]: I0129 16:52:30.112752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xqdpz" event={"ID":"6aa0a9e1-c775-4e0b-8286-ef272885c653","Type":"ContainerStarted","Data":"77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4"} Jan 29 16:52:30 crc kubenswrapper[4813]: I0129 16:52:30.114446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b3b9c405-fbca-44fa-820a-1613a7df4c9c","Type":"ContainerStarted","Data":"8e8b76ed0fa013c6fec75384e05efa7f1f3c80354a95f50dea2ec3a295bf5a92"} Jan 29 16:52:30 crc kubenswrapper[4813]: I0129 16:52:30.131959 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.51069909 podStartE2EDuration="46.131938638s" podCreationTimestamp="2026-01-29 16:51:44 +0000 UTC" firstStartedPulling="2026-01-29 16:51:45.496522929 +0000 UTC 
m=+1357.983726145" lastFinishedPulling="2026-01-29 16:52:29.117762477 +0000 UTC m=+1401.604965693" observedRunningTime="2026-01-29 16:52:30.129277082 +0000 UTC m=+1402.616480318" watchObservedRunningTime="2026-01-29 16:52:30.131938638 +0000 UTC m=+1402.619141854" Jan 29 16:52:32 crc kubenswrapper[4813]: I0129 16:52:32.130687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6463fe6f-cd6d-4078-8fa2-0d167de480df","Type":"ContainerStarted","Data":"53ac6aa3a537c0b6fd153a13fe50bef12959cc535873b4b51c03e2ead8c60e6c"} Jan 29 16:52:34 crc kubenswrapper[4813]: I0129 16:52:34.757280 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 16:52:34 crc kubenswrapper[4813]: I0129 16:52:34.757828 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 16:52:34 crc kubenswrapper[4813]: I0129 16:52:34.868747 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.153127 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.249012 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.724492 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-655nf"] Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.755926 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-x4tpd"] Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.761711 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.779675 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-x4tpd"] Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.910605 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.910710 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-config\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:36 crc kubenswrapper[4813]: I0129 16:52:36.910731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtvb7\" (UniqueName: \"kubernetes.io/projected/1c6b2950-89f5-4082-9c86-b035fa49caa9-kube-api-access-wtvb7\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.011742 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-config\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.011787 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtvb7\" (UniqueName: \"kubernetes.io/projected/1c6b2950-89f5-4082-9c86-b035fa49caa9-kube-api-access-wtvb7\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.011859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.012785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-dns-svc\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.013014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-config\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.030914 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtvb7\" (UniqueName: 
\"kubernetes.io/projected/1c6b2950-89f5-4082-9c86-b035fa49caa9-kube-api-access-wtvb7\") pod \"dnsmasq-dns-7f9f9f545f-x4tpd\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.081114 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.188090 4813 generic.go:334] "Generic (PLEG): container finished" podID="077036c4-8186-4a4e-b4ec-f9ba9b5532fe" containerID="316fb8562031ed74ced32e06fe1aa19f2539a7008ff820aef73bdea5a6c45398" exitCode=0 Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.188382 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-655nf" event={"ID":"077036c4-8186-4a4e-b4ec-f9ba9b5532fe","Type":"ContainerDied","Data":"316fb8562031ed74ced32e06fe1aa19f2539a7008ff820aef73bdea5a6c45398"} Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.191967 4813 generic.go:334] "Generic (PLEG): container finished" podID="72953957-83b4-4b8b-abd5-a8902227edc7" containerID="01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b" exitCode=0 Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.192022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" event={"ID":"72953957-83b4-4b8b-abd5-a8902227edc7","Type":"ContainerDied","Data":"01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b"} Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.194983 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xqdpz" event={"ID":"6aa0a9e1-c775-4e0b-8286-ef272885c653","Type":"ContainerStarted","Data":"2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f"} Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.196064 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.196096 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.197873 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"57a6aaa4-f80a-49fa-8236-967825494243","Type":"ContainerStarted","Data":"93a587b04c51abd55fec3ccad7f3b0bfc20db316641f2585e79d269b2d6979d1"} Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.199839 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b3b9c405-fbca-44fa-820a-1613a7df4c9c","Type":"ContainerStarted","Data":"a98d4fc4fffd6c7b6ad588f19d86d27cf2a531b07982798f2fbbced264241b68"} Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.238232 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-xqdpz" podStartSLOduration=42.484201236 podStartE2EDuration="48.238209249s" podCreationTimestamp="2026-01-29 16:51:49 +0000 UTC" firstStartedPulling="2026-01-29 16:52:18.673614427 +0000 UTC m=+1391.160817643" lastFinishedPulling="2026-01-29 16:52:24.42762241 +0000 UTC m=+1396.914825656" observedRunningTime="2026-01-29 16:52:37.229327729 +0000 UTC m=+1409.716530965" watchObservedRunningTime="2026-01-29 16:52:37.238209249 +0000 UTC m=+1409.725412465" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.258575 4813 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.387171796 podStartE2EDuration="47.258555604s" podCreationTimestamp="2026-01-29 16:51:50 +0000 UTC" firstStartedPulling="2026-01-29 16:52:19.27467506 +0000 UTC m=+1391.761878276" lastFinishedPulling="2026-01-29 16:52:36.146058868 +0000 UTC m=+1408.633262084" observedRunningTime="2026-01-29 16:52:37.25384022 +0000 UTC m=+1409.741043426" watchObservedRunningTime="2026-01-29 16:52:37.258555604 +0000 UTC m=+1409.745758820" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.279690 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.605215488 podStartE2EDuration="44.279672959s" podCreationTimestamp="2026-01-29 16:51:53 +0000 UTC" firstStartedPulling="2026-01-29 16:52:19.769780292 +0000 UTC m=+1392.256983498" lastFinishedPulling="2026-01-29 16:52:36.444237753 +0000 UTC m=+1408.931440969" observedRunningTime="2026-01-29 16:52:37.274242806 +0000 UTC m=+1409.761446032" watchObservedRunningTime="2026-01-29 16:52:37.279672959 +0000 UTC m=+1409.766876165" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.527521 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-655nf" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.619684 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-dns-svc\") pod \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.620102 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w2v2\" (UniqueName: \"kubernetes.io/projected/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-kube-api-access-5w2v2\") pod \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.620161 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-config\") pod \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\" (UID: \"077036c4-8186-4a4e-b4ec-f9ba9b5532fe\") " Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.624463 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-kube-api-access-5w2v2" (OuterVolumeSpecName: "kube-api-access-5w2v2") pod "077036c4-8186-4a4e-b4ec-f9ba9b5532fe" (UID: "077036c4-8186-4a4e-b4ec-f9ba9b5532fe"). InnerVolumeSpecName "kube-api-access-5w2v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.643619 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-config" (OuterVolumeSpecName: "config") pod "077036c4-8186-4a4e-b4ec-f9ba9b5532fe" (UID: "077036c4-8186-4a4e-b4ec-f9ba9b5532fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.648614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "077036c4-8186-4a4e-b4ec-f9ba9b5532fe" (UID: "077036c4-8186-4a4e-b4ec-f9ba9b5532fe"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.722092 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.722164 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.722178 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w2v2\" (UniqueName: \"kubernetes.io/projected/077036c4-8186-4a4e-b4ec-f9ba9b5532fe-kube-api-access-5w2v2\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.872765 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:52:37 crc kubenswrapper[4813]: E0129 16:52:37.873070 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077036c4-8186-4a4e-b4ec-f9ba9b5532fe" containerName="init" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.873083 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="077036c4-8186-4a4e-b4ec-f9ba9b5532fe" containerName="init" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.873275 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="077036c4-8186-4a4e-b4ec-f9ba9b5532fe" containerName="init" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.880985 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.885307 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-r2qq6" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.885508 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.885710 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.886565 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.893853 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.924923 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-cache\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.924988 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.925036 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2aa8580-8c90-4607-a906-c039e1e4c111-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.925061 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68n2w\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-kube-api-access-68n2w\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.925091 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-lock\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.925136 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:37 crc kubenswrapper[4813]: I0129 16:52:37.942884 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-x4tpd"] Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.026958 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-cache\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.027040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.027099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2aa8580-8c90-4607-a906-c039e1e4c111-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.027152 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68n2w\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-kube-api-access-68n2w\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.027196 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-lock\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.027229 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: 
\"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: E0129 16:52:38.028265 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:52:38 crc kubenswrapper[4813]: E0129 16:52:38.028299 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:52:38 crc kubenswrapper[4813]: E0129 16:52:38.028356 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift podName:f2aa8580-8c90-4607-a906-c039e1e4c111 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:38.528333827 +0000 UTC m=+1411.015537043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift") pod "swift-storage-0" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111") : configmap "swift-ring-files" not found Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.029534 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.029818 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-cache\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.030366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-lock\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.033919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2aa8580-8c90-4607-a906-c039e1e4c111-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.051519 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68n2w\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-kube-api-access-68n2w\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.054100 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.210362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bda951f8-8354-4ca3-be9e-f92f6fea40cc","Type":"ContainerStarted","Data":"7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4"} Jan 29 16:52:38 crc kubenswrapper[4813]: 
I0129 16:52:38.212275 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-655nf" event={"ID":"077036c4-8186-4a4e-b4ec-f9ba9b5532fe","Type":"ContainerDied","Data":"8f9fefabc57b3ea0e9f7d609e63dd8be02eb8e28fade2d59f8595521a3ed2caf"} Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.212342 4813 scope.go:117] "RemoveContainer" containerID="316fb8562031ed74ced32e06fe1aa19f2539a7008ff820aef73bdea5a6c45398" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.212527 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-655nf" Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.297889 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-655nf"] Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.303337 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-655nf"] Jan 29 16:52:38 crc kubenswrapper[4813]: W0129 16:52:38.363345 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c6b2950_89f5_4082_9c86_b035fa49caa9.slice/crio-ac495206f771db218387d81c616d5d75c8046f97d1335a4e079810fc0547f341 WatchSource:0}: Error finding container ac495206f771db218387d81c616d5d75c8046f97d1335a4e079810fc0547f341: Status 404 returned error can't find the container with id ac495206f771db218387d81c616d5d75c8046f97d1335a4e079810fc0547f341 Jan 29 16:52:38 crc kubenswrapper[4813]: I0129 16:52:38.538472 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:38 crc kubenswrapper[4813]: E0129 16:52:38.538660 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:52:38 crc kubenswrapper[4813]: E0129 16:52:38.538689 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:52:38 crc kubenswrapper[4813]: E0129 16:52:38.538741 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift podName:f2aa8580-8c90-4607-a906-c039e1e4c111 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:39.538723641 +0000 UTC m=+1412.025926857 (durationBeforeRetry 1s). 
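The failed etc-swift mount is re-queued by nestedpendingoperations with a doubling durationBeforeRetry: 500ms on the first failure above, 1s here, then 2s and 4s in the retries that follow, until the missing ConfigMap appears. A minimal sketch of that doubling-with-cap policy, assuming the 500ms initial delay and factor of 2 visible in the log (the cap value is an illustrative assumption; the log only shows the first few doublings):

package main

import (
	"fmt"
	"time"
)

// expBackoff models the retry delays visible in the
// nestedpendingoperations records: 500ms, 1s, 2s, 4s, ...
// maxDelay is assumed; the log never reaches a cap.
type expBackoff struct {
	delay, maxDelay time.Duration
}

// next returns the delay before the upcoming retry and doubles the
// stored delay, saturating at maxDelay.
func (b *expBackoff) next() time.Duration {
	d := b.delay
	if b.delay < b.maxDelay {
		b.delay *= 2
		if b.delay > b.maxDelay {
			b.delay = b.maxDelay
		}
	}
	return d
}

func main() {
	b := expBackoff{delay: 500 * time.Millisecond, maxDelay: 2 * time.Minute}
	for i := 0; i < 5; i++ {
		fmt.Println(b.next()) // 500ms 1s 2s 4s 8s
	}
}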
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift") pod "swift-storage-0" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111") : configmap "swift-ring-files" not found Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.221467 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" event={"ID":"72953957-83b4-4b8b-abd5-a8902227edc7","Type":"ContainerStarted","Data":"64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677"} Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.221683 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.224326 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c6b2950-89f5-4082-9c86-b035fa49caa9" containerID="a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee" exitCode=0 Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.224460 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" event={"ID":"1c6b2950-89f5-4082-9c86-b035fa49caa9","Type":"ContainerDied","Data":"a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee"} Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.224497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" event={"ID":"1c6b2950-89f5-4082-9c86-b035fa49caa9","Type":"ContainerStarted","Data":"ac495206f771db218387d81c616d5d75c8046f97d1335a4e079810fc0547f341"} Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.228873 4813 generic.go:334] "Generic (PLEG): container finished" podID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" containerID="a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0" exitCode=0 Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.228957 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09a277e9-f3d7-4499-b29b-ef8788c5e1b0","Type":"ContainerDied","Data":"a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0"} Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.232206 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30","Type":"ContainerStarted","Data":"bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f"} Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.232886 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.262498 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" podStartSLOduration=4.538225303 podStartE2EDuration="59.262476815s" podCreationTimestamp="2026-01-29 16:51:40 +0000 UTC" firstStartedPulling="2026-01-29 16:51:41.534328971 +0000 UTC m=+1354.021532187" lastFinishedPulling="2026-01-29 16:52:36.258580473 +0000 UTC m=+1408.745783699" observedRunningTime="2026-01-29 16:52:39.240271448 +0000 UTC m=+1411.727474664" watchObservedRunningTime="2026-01-29 16:52:39.262476815 +0000 UTC m=+1411.749680031" Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.269068 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.363146723 podStartE2EDuration="53.269050501s" 
podCreationTimestamp="2026-01-29 16:51:46 +0000 UTC" firstStartedPulling="2026-01-29 16:51:47.577179473 +0000 UTC m=+1360.064382689" lastFinishedPulling="2026-01-29 16:52:38.483083261 +0000 UTC m=+1410.970286467" observedRunningTime="2026-01-29 16:52:39.262976489 +0000 UTC m=+1411.750179705" watchObservedRunningTime="2026-01-29 16:52:39.269050501 +0000 UTC m=+1411.756253717" Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.552591 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:39 crc kubenswrapper[4813]: E0129 16:52:39.552762 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:52:39 crc kubenswrapper[4813]: E0129 16:52:39.552974 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:52:39 crc kubenswrapper[4813]: E0129 16:52:39.553032 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift podName:f2aa8580-8c90-4607-a906-c039e1e4c111 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:41.553014254 +0000 UTC m=+1414.040217470 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift") pod "swift-storage-0" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111") : configmap "swift-ring-files" not found Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.720470 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.720518 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.763323 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.767722 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 16:52:39 crc kubenswrapper[4813]: I0129 16:52:39.803939 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.252909 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077036c4-8186-4a4e-b4ec-f9ba9b5532fe" path="/var/lib/kubelet/pods/077036c4-8186-4a4e-b4ec-f9ba9b5532fe/volumes" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.253423 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" event={"ID":"1c6b2950-89f5-4082-9c86-b035fa49caa9","Type":"ContainerStarted","Data":"20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2"} Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.253446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09a277e9-f3d7-4499-b29b-ef8788c5e1b0","Type":"ContainerStarted","Data":"b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f"} Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.253461 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.253471 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.270961 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" podStartSLOduration=4.270934265 podStartE2EDuration="4.270934265s" podCreationTimestamp="2026-01-29 16:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:52:40.262091005 +0000 UTC m=+1412.749294231" watchObservedRunningTime="2026-01-29 16:52:40.270934265 +0000 UTC m=+1412.758137501" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.288094 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371977.56671 podStartE2EDuration="59.288066918s" podCreationTimestamp="2026-01-29 16:51:41 +0000 UTC" firstStartedPulling="2026-01-29 16:51:43.86374586 +0000 UTC m=+1356.350949076" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:52:40.287538773 +0000 UTC m=+1412.774741999" watchObservedRunningTime="2026-01-29 16:52:40.288066918 +0000 UTC m=+1412.775270134" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.296473 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.297659 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.478797 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnjlg"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.508908 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-64r4q"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.510387 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.512504 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.524222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-64r4q"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.652639 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qw2k9"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.653825 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.656390 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.681158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-config\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.681253 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.681355 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-dns-svc\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.681463 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czgfd\" (UniqueName: \"kubernetes.io/projected/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-kube-api-access-czgfd\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.742735 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qw2k9"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785482 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785547 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovs-rundir\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e994ed3b-ff92-4997-b056-0c3b37fcebcf-config\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785601 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-dns-svc\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " 
pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785624 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-combined-ca-bundle\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785651 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7hdp\" (UniqueName: \"kubernetes.io/projected/e994ed3b-ff92-4997-b056-0c3b37fcebcf-kube-api-access-f7hdp\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785678 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovn-rundir\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czgfd\" (UniqueName: \"kubernetes.io/projected/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-kube-api-access-czgfd\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785738 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.785770 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-config\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.788271 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.790796 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-dns-svc\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.791586 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-config\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " 
pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.819884 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czgfd\" (UniqueName: \"kubernetes.io/projected/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-kube-api-access-czgfd\") pod \"dnsmasq-dns-7c554cfdf-64r4q\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") " pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.829477 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.831743 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-x4tpd"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.872662 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ct7v4"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.884655 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.886920 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.887249 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovs-rundir\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.887312 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e994ed3b-ff92-4997-b056-0c3b37fcebcf-config\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.887372 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-combined-ca-bundle\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.887413 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7hdp\" (UniqueName: \"kubernetes.io/projected/e994ed3b-ff92-4997-b056-0c3b37fcebcf-kube-api-access-f7hdp\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.887460 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovn-rundir\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.887522 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qw2k9\" (UID: 
\"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.894884 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-combined-ca-bundle\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.895058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.895151 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovs-rundir\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.895472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovn-rundir\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.896301 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e994ed3b-ff92-4997-b056-0c3b37fcebcf-config\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.900515 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ct7v4"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.944703 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7hdp\" (UniqueName: \"kubernetes.io/projected/e994ed3b-ff92-4997-b056-0c3b37fcebcf-kube-api-access-f7hdp\") pod \"ovn-controller-metrics-qw2k9\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") " pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.961858 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.964280 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.970822 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.971189 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-fd2cw" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.971342 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.973554 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.974449 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.992469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.992530 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.992618 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-config\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.992645 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:40 crc kubenswrapper[4813]: I0129 16:52:40.992686 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlbp7\" (UniqueName: \"kubernetes.io/projected/6c75534b-dabc-4df6-bd87-515d6ec3e73d-kube-api-access-qlbp7\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.009829 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.099274 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.099830 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.099896 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " 
pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.099979 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-scripts\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.100088 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptkv\" (UniqueName: \"kubernetes.io/projected/ee2dda38-9821-41ec-b524-13a48badf5e9-kube-api-access-cptkv\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.100203 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-config\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.100244 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.100279 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.100335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlbp7\" (UniqueName: \"kubernetes.io/projected/6c75534b-dabc-4df6-bd87-515d6ec3e73d-kube-api-access-qlbp7\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.100490 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-config\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.100521 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.100543 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.101285 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-config\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.101815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.102470 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.103851 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.129924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlbp7\" (UniqueName: \"kubernetes.io/projected/6c75534b-dabc-4df6-bd87-515d6ec3e73d-kube-api-access-qlbp7\") pod \"dnsmasq-dns-67fdf7998c-ct7v4\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.201869 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.201995 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-config\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.202032 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.202064 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.202104 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " 
pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.202170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-scripts\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.202197 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cptkv\" (UniqueName: \"kubernetes.io/projected/ee2dda38-9821-41ec-b524-13a48badf5e9-kube-api-access-cptkv\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.202439 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.203387 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-config\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.205254 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-scripts\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.205552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.206183 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.206188 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.221941 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cptkv\" (UniqueName: \"kubernetes.io/projected/ee2dda38-9821-41ec-b524-13a48badf5e9-kube-api-access-cptkv\") pod \"ovn-northd-0\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.252800 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" containerName="dnsmasq-dns" containerID="cri-o://64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677" gracePeriod=10 Jan 
29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.315319 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.365098 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.421349 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-64r4q"] Jan 29 16:52:41 crc kubenswrapper[4813]: W0129 16:52:41.440367 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e3dfbcb_2a07_42c4_b345_a4f758bd78cc.slice/crio-2e2253bd0b18aa5f45186356762bfedd717cf7e520d43f32ddf6f649c75c4545 WatchSource:0}: Error finding container 2e2253bd0b18aa5f45186356762bfedd717cf7e520d43f32ddf6f649c75c4545: Status 404 returned error can't find the container with id 2e2253bd0b18aa5f45186356762bfedd717cf7e520d43f32ddf6f649c75c4545 Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.503630 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qw2k9"] Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.614954 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:41 crc kubenswrapper[4813]: E0129 16:52:41.615607 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:52:41 crc kubenswrapper[4813]: E0129 16:52:41.615624 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:52:41 crc kubenswrapper[4813]: E0129 16:52:41.615675 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift podName:f2aa8580-8c90-4607-a906-c039e1e4c111 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:45.615658184 +0000 UTC m=+1418.102861400 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift") pod "swift-storage-0" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111") : configmap "swift-ring-files" not found Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.737512 4813 util.go:48] "No ready sandbox for pod can be found. 
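The etc-swift volume that keeps failing (now on a 4s backoff) is a projected volume whose only visible source is the ConfigMap swift-ring-files; mounting cannot succeed until something, presumably the swift-ring-rebalance-fpmz6 job created below, publishes that ConfigMap. A sketch of a volume shape that reproduces exactly this error, using the k8s.io/api/core/v1 types (the single-ConfigMap source list is inferred from the error text, not read from the actual pod spec; building it requires k8s.io/api on the module path):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected volume like the one kubelet is mounting for
	// swift-storage-0: until the referenced ConfigMap exists,
	// MountVolume.SetUp fails with `configmap "swift-ring-files"
	// not found` and is retried with backoff.
	vol := corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "swift-ring-files",
						},
					},
				}},
			},
		},
	}
	fmt.Println(vol.VolumeSource.Projected.Sources[0].ConfigMap.Name)
}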
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.743368 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ct7v4"] Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.817602 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-dns-svc\") pod \"72953957-83b4-4b8b-abd5-a8902227edc7\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.817691 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-config\") pod \"72953957-83b4-4b8b-abd5-a8902227edc7\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.817777 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjk4s\" (UniqueName: \"kubernetes.io/projected/72953957-83b4-4b8b-abd5-a8902227edc7-kube-api-access-bjk4s\") pod \"72953957-83b4-4b8b-abd5-a8902227edc7\" (UID: \"72953957-83b4-4b8b-abd5-a8902227edc7\") " Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.822524 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72953957-83b4-4b8b-abd5-a8902227edc7-kube-api-access-bjk4s" (OuterVolumeSpecName: "kube-api-access-bjk4s") pod "72953957-83b4-4b8b-abd5-a8902227edc7" (UID: "72953957-83b4-4b8b-abd5-a8902227edc7"). InnerVolumeSpecName "kube-api-access-bjk4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.861734 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-fpmz6"] Jan 29 16:52:41 crc kubenswrapper[4813]: E0129 16:52:41.862129 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" containerName="dnsmasq-dns" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.862152 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" containerName="dnsmasq-dns" Jan 29 16:52:41 crc kubenswrapper[4813]: E0129 16:52:41.862189 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" containerName="init" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.862199 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" containerName="init" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.862384 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" containerName="dnsmasq-dns" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.872758 4813 util.go:30] "No sandbox for pod can be found. 
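After the dnsmasq-dns-95f5f6995-pnjlg container is killed with gracePeriod=10 and its volumes are unmounted, the CPU and memory managers drop their per-container bookkeeping for the deleted pod, as the RemoveStaleState and "Deleted CPUSet assignment" entries above show. A minimal sketch of that cleanup idea, assuming state keyed by (podUID, containerName) as the log reports it (staleStateCache is an illustrative stand-in for the managers' checkpointed state):

package main

import "fmt"

// key mirrors how the cpu_manager/memory_manager records identify
// state: a podUID plus a containerName.
type key struct {
	podUID, container string
}

// staleStateCache is an illustrative stand-in for the managers'
// checkpointed assignments (value: e.g. an assigned CPU set).
type staleStateCache map[key]string

// removeStaleState deletes every entry belonging to pods that are no
// longer active, as RemoveStaleState does for 72953957-... above.
func (c staleStateCache) removeStaleState(active map[string]bool) {
	for k := range c {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(c, k)
		}
	}
}

func main() {
	c := staleStateCache{
		{"72953957-83b4-4b8b-abd5-a8902227edc7", "dnsmasq-dns"}: "0-3",
		{"72953957-83b4-4b8b-abd5-a8902227edc7", "init"}:        "0-3",
	}
	c.removeStaleState(map[string]bool{}) // pod was deleted; both entries go
	fmt.Println(len(c))                   // 0
}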
Need to start a new one" pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.885415 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.885682 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.885808 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.890260 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "72953957-83b4-4b8b-abd5-a8902227edc7" (UID: "72953957-83b4-4b8b-abd5-a8902227edc7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.891290 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fpmz6"] Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.915836 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-config" (OuterVolumeSpecName: "config") pod "72953957-83b4-4b8b-abd5-a8902227edc7" (UID: "72953957-83b4-4b8b-abd5-a8902227edc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.919651 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.919692 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72953957-83b4-4b8b-abd5-a8902227edc7-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.919706 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjk4s\" (UniqueName: \"kubernetes.io/projected/72953957-83b4-4b8b-abd5-a8902227edc7-kube-api-access-bjk4s\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:41 crc kubenswrapper[4813]: I0129 16:52:41.950318 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.020698 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tssdv\" (UniqueName: \"kubernetes.io/projected/3acd2fcf-e5d3-43bc-b216-225edbc7114a-kube-api-access-tssdv\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.021033 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-swiftconf\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.021082 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-scripts\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.021163 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3acd2fcf-e5d3-43bc-b216-225edbc7114a-etc-swift\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.021187 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-ring-data-devices\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.021219 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-combined-ca-bundle\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.021243 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-dispersionconf\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.122724 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-dispersionconf\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.123248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tssdv\" (UniqueName: \"kubernetes.io/projected/3acd2fcf-e5d3-43bc-b216-225edbc7114a-kube-api-access-tssdv\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.123382 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-swiftconf\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.123420 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-scripts\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.123480 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/3acd2fcf-e5d3-43bc-b216-225edbc7114a-etc-swift\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.123553 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-ring-data-devices\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.124101 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3acd2fcf-e5d3-43bc-b216-225edbc7114a-etc-swift\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.124222 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-combined-ca-bundle\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.124268 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-ring-data-devices\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.124227 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-scripts\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.128047 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-swiftconf\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.128615 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-combined-ca-bundle\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.129099 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-dispersionconf\") pod \"swift-ring-rebalance-fpmz6\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.138592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tssdv\" (UniqueName: \"kubernetes.io/projected/3acd2fcf-e5d3-43bc-b216-225edbc7114a-kube-api-access-tssdv\") pod \"swift-ring-rebalance-fpmz6\" (UID: 
\"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.260995 4813 generic.go:334] "Generic (PLEG): container finished" podID="72953957-83b4-4b8b-abd5-a8902227edc7" containerID="64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677" exitCode=0 Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.261065 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" event={"ID":"72953957-83b4-4b8b-abd5-a8902227edc7","Type":"ContainerDied","Data":"64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.261087 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.261124 4813 scope.go:117] "RemoveContainer" containerID="64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.261096 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-pnjlg" event={"ID":"72953957-83b4-4b8b-abd5-a8902227edc7","Type":"ContainerDied","Data":"9b3828602681ca794358c4becccbc7e3e9905cc83111d8e603bffec67454c165"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.263227 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qw2k9" event={"ID":"e994ed3b-ff92-4997-b056-0c3b37fcebcf","Type":"ContainerStarted","Data":"9ed8ecb763d33646b3e8cc91f896a3b2de2c4f6990d7c4d2843989603afddfed"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.263259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qw2k9" event={"ID":"e994ed3b-ff92-4997-b056-0c3b37fcebcf","Type":"ContainerStarted","Data":"81d09bce7d548ab32871775a42d2aeb244c30940c5fdfe84195ccb8a5a512aba"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.264512 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ee2dda38-9821-41ec-b524-13a48badf5e9","Type":"ContainerStarted","Data":"3a7bc22fb530e3f11ed2b8c1d4f78e9138dcdf8b0a7c5fe11ebca00bcceaa80e"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.266672 4813 generic.go:334] "Generic (PLEG): container finished" podID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" containerID="e5b3fbb3afcdba2ef3d6466a8a826cd3607db54b2d63062bc8f8f914902de3d8" exitCode=0 Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.266749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" event={"ID":"6c75534b-dabc-4df6-bd87-515d6ec3e73d","Type":"ContainerDied","Data":"e5b3fbb3afcdba2ef3d6466a8a826cd3607db54b2d63062bc8f8f914902de3d8"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.267194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" event={"ID":"6c75534b-dabc-4df6-bd87-515d6ec3e73d","Type":"ContainerStarted","Data":"d8e082cee304858bf6ce8324fdf6d52bb1b4c0b9bff33f5f6c5642e63994f950"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.271051 4813 generic.go:334] "Generic (PLEG): container finished" podID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" containerID="c56b6e2f84da0b9ef0ef7e721583186a09d0c94cbc6615ec460afc630b34db1d" exitCode=0 Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.271254 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" event={"ID":"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc","Type":"ContainerDied","Data":"c56b6e2f84da0b9ef0ef7e721583186a09d0c94cbc6615ec460afc630b34db1d"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.271307 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" event={"ID":"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc","Type":"ContainerStarted","Data":"2e2253bd0b18aa5f45186356762bfedd717cf7e520d43f32ddf6f649c75c4545"} Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.271304 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" podUID="1c6b2950-89f5-4082-9c86-b035fa49caa9" containerName="dnsmasq-dns" containerID="cri-o://20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2" gracePeriod=10 Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.287181 4813 scope.go:117] "RemoveContainer" containerID="01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.289649 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qw2k9" podStartSLOduration=2.289632894 podStartE2EDuration="2.289632894s" podCreationTimestamp="2026-01-29 16:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:52:42.288182893 +0000 UTC m=+1414.775386129" watchObservedRunningTime="2026-01-29 16:52:42.289632894 +0000 UTC m=+1414.776836130" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.312671 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.317110 4813 scope.go:117] "RemoveContainer" containerID="64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677" Jan 29 16:52:42 crc kubenswrapper[4813]: E0129 16:52:42.318327 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677\": container with ID starting with 64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677 not found: ID does not exist" containerID="64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.318355 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677"} err="failed to get container status \"64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677\": rpc error: code = NotFound desc = could not find container \"64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677\": container with ID starting with 64795ba1336f355e3639fa67fa56b2e66eff81628d07aac50ded48a885f16677 not found: ID does not exist" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.318375 4813 scope.go:117] "RemoveContainer" containerID="01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b" Jan 29 16:52:42 crc kubenswrapper[4813]: E0129 16:52:42.320714 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b\": container with ID starting with 
01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b not found: ID does not exist" containerID="01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.320748 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b"} err="failed to get container status \"01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b\": rpc error: code = NotFound desc = could not find container \"01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b\": container with ID starting with 01f82b71588bc4ce6b00fcbbc9e821633dba8633590a67d69601cc122123869b not found: ID does not exist" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.346993 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnjlg"] Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.363954 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-pnjlg"] Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.714319 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.840602 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-dns-svc\") pod \"1c6b2950-89f5-4082-9c86-b035fa49caa9\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.841451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-config\") pod \"1c6b2950-89f5-4082-9c86-b035fa49caa9\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.841568 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtvb7\" (UniqueName: \"kubernetes.io/projected/1c6b2950-89f5-4082-9c86-b035fa49caa9-kube-api-access-wtvb7\") pod \"1c6b2950-89f5-4082-9c86-b035fa49caa9\" (UID: \"1c6b2950-89f5-4082-9c86-b035fa49caa9\") " Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.847259 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6b2950-89f5-4082-9c86-b035fa49caa9-kube-api-access-wtvb7" (OuterVolumeSpecName: "kube-api-access-wtvb7") pod "1c6b2950-89f5-4082-9c86-b035fa49caa9" (UID: "1c6b2950-89f5-4082-9c86-b035fa49caa9"). InnerVolumeSpecName "kube-api-access-wtvb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.878836 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-fpmz6"] Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.892323 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c6b2950-89f5-4082-9c86-b035fa49caa9" (UID: "1c6b2950-89f5-4082-9c86-b035fa49caa9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.898790 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-config" (OuterVolumeSpecName: "config") pod "1c6b2950-89f5-4082-9c86-b035fa49caa9" (UID: "1c6b2950-89f5-4082-9c86-b035fa49caa9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.945082 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtvb7\" (UniqueName: \"kubernetes.io/projected/1c6b2950-89f5-4082-9c86-b035fa49caa9-kube-api-access-wtvb7\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.945141 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:42 crc kubenswrapper[4813]: I0129 16:52:42.945158 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c6b2950-89f5-4082-9c86-b035fa49caa9-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.244152 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.244206 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.289766 4813 generic.go:334] "Generic (PLEG): container finished" podID="1c6b2950-89f5-4082-9c86-b035fa49caa9" containerID="20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2" exitCode=0 Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.289816 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" event={"ID":"1c6b2950-89f5-4082-9c86-b035fa49caa9","Type":"ContainerDied","Data":"20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2"} Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.289841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd" event={"ID":"1c6b2950-89f5-4082-9c86-b035fa49caa9","Type":"ContainerDied","Data":"ac495206f771db218387d81c616d5d75c8046f97d1335a4e079810fc0547f341"} Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.289857 4813 scope.go:117] "RemoveContainer" containerID="20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2" Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.289951 4813 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.289951 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9f9f545f-x4tpd"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.301807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" event={"ID":"6c75534b-dabc-4df6-bd87-515d6ec3e73d","Type":"ContainerStarted","Data":"657006dc517868a8d5f18265519f539855679f9f9dc88ebe6e1bed7196b535b3"}
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.301911 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.303546 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fpmz6" event={"ID":"3acd2fcf-e5d3-43bc-b216-225edbc7114a","Type":"ContainerStarted","Data":"a234f58d409d4245dd88a807a23703b4333611542c6c38f766eea6a20caff659"}
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.310081 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" event={"ID":"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc","Type":"ContainerStarted","Data":"0e81bc7a7f166774dc09dfb6e1bdb639a9f93a5bc924dc5bcf95654bf156ff12"}
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.310906 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.328925 4813 scope.go:117] "RemoveContainer" containerID="a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.333611 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" podStartSLOduration=3.333586664 podStartE2EDuration="3.333586664s" podCreationTimestamp="2026-01-29 16:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:52:43.321530784 +0000 UTC m=+1415.808734010" watchObservedRunningTime="2026-01-29 16:52:43.333586664 +0000 UTC m=+1415.820789880"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.350476 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" podStartSLOduration=3.350456 podStartE2EDuration="3.350456s" podCreationTimestamp="2026-01-29 16:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:52:43.338527254 +0000 UTC m=+1415.825730470" watchObservedRunningTime="2026-01-29 16:52:43.350456 +0000 UTC m=+1415.837659216"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.369232 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-x4tpd"]
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.376802 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9f9f545f-x4tpd"]
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.402303 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-47sht"]
Jan 29 16:52:43 crc kubenswrapper[4813]: E0129 16:52:43.402807 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6b2950-89f5-4082-9c86-b035fa49caa9" containerName="dnsmasq-dns"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.402838 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6b2950-89f5-4082-9c86-b035fa49caa9" containerName="dnsmasq-dns"
Jan 29 16:52:43 crc kubenswrapper[4813]: E0129 16:52:43.402876 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6b2950-89f5-4082-9c86-b035fa49caa9" containerName="init"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.402890 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6b2950-89f5-4082-9c86-b035fa49caa9" containerName="init"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.403244 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6b2950-89f5-4082-9c86-b035fa49caa9" containerName="dnsmasq-dns"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.404049 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.406067 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.411198 4813 scope.go:117] "RemoveContainer" containerID="20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2"
Jan 29 16:52:43 crc kubenswrapper[4813]: E0129 16:52:43.412022 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2\": container with ID starting with 20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2 not found: ID does not exist" containerID="20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.412065 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2"} err="failed to get container status \"20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2\": rpc error: code = NotFound desc = could not find container \"20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2\": container with ID starting with 20057ee42424ee6acfeff65e16a7288fba32a114762f9f1bfaa0bdaf5647a2e2 not found: ID does not exist"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.412092 4813 scope.go:117] "RemoveContainer" containerID="a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.413007 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-47sht"]
Jan 29 16:52:43 crc kubenswrapper[4813]: E0129 16:52:43.413336 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee\": container with ID starting with a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee not found: ID does not exist" containerID="a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.413389 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee"} err="failed to get container status \"a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee\": rpc error: code = NotFound desc = could not find container \"a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee\": container with ID starting with a5019a59643b03730dac168f504e4903744ed5056c00224e85eb7927ddf761ee not found: ID does not exist"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.452895 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a1744f-57a3-40d0-aa50-3552b616df70-operator-scripts\") pod \"root-account-create-update-47sht\" (UID: \"d8a1744f-57a3-40d0-aa50-3552b616df70\") " pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.452949 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7wlv\" (UniqueName: \"kubernetes.io/projected/d8a1744f-57a3-40d0-aa50-3552b616df70-kube-api-access-x7wlv\") pod \"root-account-create-update-47sht\" (UID: \"d8a1744f-57a3-40d0-aa50-3552b616df70\") " pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.554311 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a1744f-57a3-40d0-aa50-3552b616df70-operator-scripts\") pod \"root-account-create-update-47sht\" (UID: \"d8a1744f-57a3-40d0-aa50-3552b616df70\") " pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.554379 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7wlv\" (UniqueName: \"kubernetes.io/projected/d8a1744f-57a3-40d0-aa50-3552b616df70-kube-api-access-x7wlv\") pod \"root-account-create-update-47sht\" (UID: \"d8a1744f-57a3-40d0-aa50-3552b616df70\") " pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.555517 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a1744f-57a3-40d0-aa50-3552b616df70-operator-scripts\") pod \"root-account-create-update-47sht\" (UID: \"d8a1744f-57a3-40d0-aa50-3552b616df70\") " pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.576100 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7wlv\" (UniqueName: \"kubernetes.io/projected/d8a1744f-57a3-40d0-aa50-3552b616df70-kube-api-access-x7wlv\") pod \"root-account-create-update-47sht\" (UID: \"d8a1744f-57a3-40d0-aa50-3552b616df70\") " pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:43 crc kubenswrapper[4813]: I0129 16:52:43.724166 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:44 crc kubenswrapper[4813]: I0129 16:52:44.266350 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6b2950-89f5-4082-9c86-b035fa49caa9" path="/var/lib/kubelet/pods/1c6b2950-89f5-4082-9c86-b035fa49caa9/volumes"
Jan 29 16:52:44 crc kubenswrapper[4813]: I0129 16:52:44.268183 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72953957-83b4-4b8b-abd5-a8902227edc7" path="/var/lib/kubelet/pods/72953957-83b4-4b8b-abd5-a8902227edc7/volumes"
Jan 29 16:52:44 crc kubenswrapper[4813]: I0129 16:52:44.268849 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-47sht"]
Jan 29 16:52:44 crc kubenswrapper[4813]: I0129 16:52:44.326644 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ee2dda38-9821-41ec-b524-13a48badf5e9","Type":"ContainerStarted","Data":"fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64"}
Jan 29 16:52:44 crc kubenswrapper[4813]: I0129 16:52:44.328836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ee2dda38-9821-41ec-b524-13a48badf5e9","Type":"ContainerStarted","Data":"9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c"}
Jan 29 16:52:44 crc kubenswrapper[4813]: I0129 16:52:44.328857 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 29 16:52:44 crc kubenswrapper[4813]: I0129 16:52:44.333400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47sht" event={"ID":"d8a1744f-57a3-40d0-aa50-3552b616df70","Type":"ContainerStarted","Data":"9785e588e428a937e8e803848a218c512d8b7db7ba2ed71a8008fca6e2516306"}
Jan 29 16:52:44 crc kubenswrapper[4813]: I0129 16:52:44.356626 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.895877423 podStartE2EDuration="4.356610045s" podCreationTimestamp="2026-01-29 16:52:40 +0000 UTC" firstStartedPulling="2026-01-29 16:52:41.960937538 +0000 UTC m=+1414.448140754" lastFinishedPulling="2026-01-29 16:52:43.42167016 +0000 UTC m=+1415.908873376" observedRunningTime="2026-01-29 16:52:44.353570129 +0000 UTC m=+1416.840773355" watchObservedRunningTime="2026-01-29 16:52:44.356610045 +0000 UTC m=+1416.843813261"
Jan 29 16:52:45 crc kubenswrapper[4813]: I0129 16:52:45.342163 4813 generic.go:334] "Generic (PLEG): container finished" podID="d8a1744f-57a3-40d0-aa50-3552b616df70" containerID="ee0460e20b36b46ac840fc93a263e55239f303f9abfa21e1bb30ac7f1e3db8c9" exitCode=0
Jan 29 16:52:45 crc kubenswrapper[4813]: I0129 16:52:45.342237 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47sht" event={"ID":"d8a1744f-57a3-40d0-aa50-3552b616df70","Type":"ContainerDied","Data":"ee0460e20b36b46ac840fc93a263e55239f303f9abfa21e1bb30ac7f1e3db8c9"}
Jan 29 16:52:45 crc kubenswrapper[4813]: I0129 16:52:45.694800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0"
Jan 29 16:52:45 crc kubenswrapper[4813]: E0129 16:52:45.694978 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 16:52:45 crc kubenswrapper[4813]: E0129 16:52:45.694998 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 16:52:45 crc kubenswrapper[4813]: E0129 16:52:45.695059 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift podName:f2aa8580-8c90-4607-a906-c039e1e4c111 nodeName:}" failed. No retries permitted until 2026-01-29 16:52:53.695039007 +0000 UTC m=+1426.182242223 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift") pod "swift-storage-0" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111") : configmap "swift-ring-files" not found
Jan 29 16:52:46 crc kubenswrapper[4813]: I0129 16:52:46.800971 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 16:52:47 crc kubenswrapper[4813]: I0129 16:52:47.318040 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 29 16:52:47 crc kubenswrapper[4813]: I0129 16:52:47.431743 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.373160 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-47sht" event={"ID":"d8a1744f-57a3-40d0-aa50-3552b616df70","Type":"ContainerDied","Data":"9785e588e428a937e8e803848a218c512d8b7db7ba2ed71a8008fca6e2516306"}
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.373200 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9785e588e428a937e8e803848a218c512d8b7db7ba2ed71a8008fca6e2516306"
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.421713 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.596155 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7wlv\" (UniqueName: \"kubernetes.io/projected/d8a1744f-57a3-40d0-aa50-3552b616df70-kube-api-access-x7wlv\") pod \"d8a1744f-57a3-40d0-aa50-3552b616df70\" (UID: \"d8a1744f-57a3-40d0-aa50-3552b616df70\") "
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.596203 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a1744f-57a3-40d0-aa50-3552b616df70-operator-scripts\") pod \"d8a1744f-57a3-40d0-aa50-3552b616df70\" (UID: \"d8a1744f-57a3-40d0-aa50-3552b616df70\") "
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.597066 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a1744f-57a3-40d0-aa50-3552b616df70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8a1744f-57a3-40d0-aa50-3552b616df70" (UID: "d8a1744f-57a3-40d0-aa50-3552b616df70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.600521 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a1744f-57a3-40d0-aa50-3552b616df70-kube-api-access-x7wlv" (OuterVolumeSpecName: "kube-api-access-x7wlv") pod "d8a1744f-57a3-40d0-aa50-3552b616df70" (UID: "d8a1744f-57a3-40d0-aa50-3552b616df70"). InnerVolumeSpecName "kube-api-access-x7wlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.699539 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7wlv\" (UniqueName: \"kubernetes.io/projected/d8a1744f-57a3-40d0-aa50-3552b616df70-kube-api-access-x7wlv\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:48 crc kubenswrapper[4813]: I0129 16:52:48.700212 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a1744f-57a3-40d0-aa50-3552b616df70-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:49 crc kubenswrapper[4813]: I0129 16:52:49.382456 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-47sht"
Jan 29 16:52:49 crc kubenswrapper[4813]: I0129 16:52:49.388423 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fpmz6" event={"ID":"3acd2fcf-e5d3-43bc-b216-225edbc7114a","Type":"ContainerStarted","Data":"5f48223a92a9444cb85942e403ab8901d1e45d0547889895defb0452c6c9b52a"}
Jan 29 16:52:49 crc kubenswrapper[4813]: I0129 16:52:49.421920 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-fpmz6" podStartSLOduration=2.843292609 podStartE2EDuration="8.421898381s" podCreationTimestamp="2026-01-29 16:52:41 +0000 UTC" firstStartedPulling="2026-01-29 16:52:42.892307931 +0000 UTC m=+1415.379511147" lastFinishedPulling="2026-01-29 16:52:48.470913703 +0000 UTC m=+1420.958116919" observedRunningTime="2026-01-29 16:52:49.406379883 +0000 UTC m=+1421.893583099" watchObservedRunningTime="2026-01-29 16:52:49.421898381 +0000 UTC m=+1421.909101617"
Jan 29 16:52:50 crc kubenswrapper[4813]: I0129 16:52:50.840490 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q"
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.318360 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4"
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.413344 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-64r4q"]
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.413859 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" podUID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" containerName="dnsmasq-dns" containerID="cri-o://0e81bc7a7f166774dc09dfb6e1bdb639a9f93a5bc924dc5bcf95654bf156ff12" gracePeriod=10
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.873204 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-47sht"]
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.878193 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-47sht"]
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.922524 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pm82f"]
Jan 29 16:52:51 crc kubenswrapper[4813]: E0129 16:52:51.922956 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a1744f-57a3-40d0-aa50-3552b616df70" containerName="mariadb-account-create-update"
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.922982 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a1744f-57a3-40d0-aa50-3552b616df70" containerName="mariadb-account-create-update"
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.923194 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a1744f-57a3-40d0-aa50-3552b616df70" containerName="mariadb-account-create-update"
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.923854 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.926774 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 29 16:52:51 crc kubenswrapper[4813]: I0129 16:52:51.939884 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pm82f"]
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.067257 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnhv\" (UniqueName: \"kubernetes.io/projected/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-kube-api-access-hmnhv\") pod \"root-account-create-update-pm82f\" (UID: \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\") " pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.067347 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-operator-scripts\") pod \"root-account-create-update-pm82f\" (UID: \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\") " pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.169528 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-operator-scripts\") pod \"root-account-create-update-pm82f\" (UID: \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\") " pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.169792 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmnhv\" (UniqueName: \"kubernetes.io/projected/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-kube-api-access-hmnhv\") pod \"root-account-create-update-pm82f\" (UID: \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\") " pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.182530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-operator-scripts\") pod \"root-account-create-update-pm82f\" (UID: \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\") " pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.201204 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmnhv\" (UniqueName: \"kubernetes.io/projected/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-kube-api-access-hmnhv\") pod \"root-account-create-update-pm82f\" (UID: \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\") " pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.247997 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.252906 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a1744f-57a3-40d0-aa50-3552b616df70" path="/var/lib/kubelet/pods/d8a1744f-57a3-40d0-aa50-3552b616df70/volumes"
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.437562 4813 generic.go:334] "Generic (PLEG): container finished" podID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" containerID="0e81bc7a7f166774dc09dfb6e1bdb639a9f93a5bc924dc5bcf95654bf156ff12" exitCode=0
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.437608 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" event={"ID":"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc","Type":"ContainerDied","Data":"0e81bc7a7f166774dc09dfb6e1bdb639a9f93a5bc924dc5bcf95654bf156ff12"}
Jan 29 16:52:52 crc kubenswrapper[4813]: I0129 16:52:52.630143 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pm82f"]
Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.028170 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q"
Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.098729 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-config\") pod \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") "
Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.098841 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czgfd\" (UniqueName: \"kubernetes.io/projected/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-kube-api-access-czgfd\") pod \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") "
Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.098901 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-dns-svc\") pod \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") "
Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.098975 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-ovsdbserver-nb\") pod \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\" (UID: \"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.145358 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" (UID: "2e3dfbcb-2a07-42c4-b345-a4f758bd78cc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.145380 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-config" (OuterVolumeSpecName: "config") pod "2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" (UID: "2e3dfbcb-2a07-42c4-b345-a4f758bd78cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.156586 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" (UID: "2e3dfbcb-2a07-42c4-b345-a4f758bd78cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.202352 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.202398 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.202411 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czgfd\" (UniqueName: \"kubernetes.io/projected/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-kube-api-access-czgfd\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.202422 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.448399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" event={"ID":"2e3dfbcb-2a07-42c4-b345-a4f758bd78cc","Type":"ContainerDied","Data":"2e2253bd0b18aa5f45186356762bfedd717cf7e520d43f32ddf6f649c75c4545"} Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.448489 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c554cfdf-64r4q" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.448505 4813 scope.go:117] "RemoveContainer" containerID="0e81bc7a7f166774dc09dfb6e1bdb639a9f93a5bc924dc5bcf95654bf156ff12" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.451642 4813 generic.go:334] "Generic (PLEG): container finished" podID="ec85c172-6e2f-48f5-939a-fcd4fd5c93b4" containerID="6cbf30d9a6f8e6434b88c8267402579aa71062196797b6ff8c2ef6920d946613" exitCode=0 Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.451698 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pm82f" event={"ID":"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4","Type":"ContainerDied","Data":"6cbf30d9a6f8e6434b88c8267402579aa71062196797b6ff8c2ef6920d946613"} Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.451757 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pm82f" event={"ID":"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4","Type":"ContainerStarted","Data":"f4e117fef1bd7fb7c2277dd6ffef51b4cf7d99c6021bc5c00d0d31cd88545341"} Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.473271 4813 scope.go:117] "RemoveContainer" containerID="c56b6e2f84da0b9ef0ef7e721583186a09d0c94cbc6615ec460afc630b34db1d" Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.493705 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-64r4q"] Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.501091 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c554cfdf-64r4q"] Jan 29 16:52:53 crc kubenswrapper[4813]: I0129 16:52:53.710869 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:52:53 crc kubenswrapper[4813]: E0129 16:52:53.711046 4813 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 16:52:53 crc kubenswrapper[4813]: E0129 16:52:53.711061 4813 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 16:52:53 crc kubenswrapper[4813]: E0129 16:52:53.711104 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift podName:f2aa8580-8c90-4607-a906-c039e1e4c111 nodeName:}" failed. No retries permitted until 2026-01-29 16:53:09.711089174 +0000 UTC m=+1442.198292390 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift") pod "swift-storage-0" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111") : configmap "swift-ring-files" not found Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.258211 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" path="/var/lib/kubelet/pods/2e3dfbcb-2a07-42c4-b345-a4f758bd78cc/volumes" Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.487161 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-vdkq7"] Jan 29 16:52:54 crc kubenswrapper[4813]: E0129 16:52:54.487833 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" containerName="dnsmasq-dns" Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.487857 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" containerName="dnsmasq-dns" Jan 29 16:52:54 crc kubenswrapper[4813]: E0129 16:52:54.487882 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" containerName="init" Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.487890 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" containerName="init" Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.488103 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3dfbcb-2a07-42c4-b345-a4f758bd78cc" containerName="dnsmasq-dns" Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.488694 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vdkq7"] Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.488786 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vdkq7" Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.537183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a667f87f-f7c9-4f33-8e88-db86259f3111-operator-scripts\") pod \"keystone-db-create-vdkq7\" (UID: \"a667f87f-f7c9-4f33-8e88-db86259f3111\") " pod="openstack/keystone-db-create-vdkq7" Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.537371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxmj\" (UniqueName: \"kubernetes.io/projected/a667f87f-f7c9-4f33-8e88-db86259f3111-kube-api-access-smxmj\") pod \"keystone-db-create-vdkq7\" (UID: \"a667f87f-f7c9-4f33-8e88-db86259f3111\") " pod="openstack/keystone-db-create-vdkq7" Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.564372 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-94cf-account-create-update-d82t2"] Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.565429 4813 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.571888 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.585170 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-94cf-account-create-update-d82t2"]
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.638779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ab991e-8fc7-4320-8627-0a1020527696-operator-scripts\") pod \"keystone-94cf-account-create-update-d82t2\" (UID: \"69ab991e-8fc7-4320-8627-0a1020527696\") " pod="openstack/keystone-94cf-account-create-update-d82t2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.638824 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4wk\" (UniqueName: \"kubernetes.io/projected/69ab991e-8fc7-4320-8627-0a1020527696-kube-api-access-6d4wk\") pod \"keystone-94cf-account-create-update-d82t2\" (UID: \"69ab991e-8fc7-4320-8627-0a1020527696\") " pod="openstack/keystone-94cf-account-create-update-d82t2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.638861 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxmj\" (UniqueName: \"kubernetes.io/projected/a667f87f-f7c9-4f33-8e88-db86259f3111-kube-api-access-smxmj\") pod \"keystone-db-create-vdkq7\" (UID: \"a667f87f-f7c9-4f33-8e88-db86259f3111\") " pod="openstack/keystone-db-create-vdkq7"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.639052 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a667f87f-f7c9-4f33-8e88-db86259f3111-operator-scripts\") pod \"keystone-db-create-vdkq7\" (UID: \"a667f87f-f7c9-4f33-8e88-db86259f3111\") " pod="openstack/keystone-db-create-vdkq7"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.640079 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a667f87f-f7c9-4f33-8e88-db86259f3111-operator-scripts\") pod \"keystone-db-create-vdkq7\" (UID: \"a667f87f-f7c9-4f33-8e88-db86259f3111\") " pod="openstack/keystone-db-create-vdkq7"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.657252 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxmj\" (UniqueName: \"kubernetes.io/projected/a667f87f-f7c9-4f33-8e88-db86259f3111-kube-api-access-smxmj\") pod \"keystone-db-create-vdkq7\" (UID: \"a667f87f-f7c9-4f33-8e88-db86259f3111\") " pod="openstack/keystone-db-create-vdkq7"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.749565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ab991e-8fc7-4320-8627-0a1020527696-operator-scripts\") pod \"keystone-94cf-account-create-update-d82t2\" (UID: \"69ab991e-8fc7-4320-8627-0a1020527696\") " pod="openstack/keystone-94cf-account-create-update-d82t2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.756193 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ab991e-8fc7-4320-8627-0a1020527696-operator-scripts\") pod \"keystone-94cf-account-create-update-d82t2\" (UID: \"69ab991e-8fc7-4320-8627-0a1020527696\") " pod="openstack/keystone-94cf-account-create-update-d82t2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.757140 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4wk\" (UniqueName: \"kubernetes.io/projected/69ab991e-8fc7-4320-8627-0a1020527696-kube-api-access-6d4wk\") pod \"keystone-94cf-account-create-update-d82t2\" (UID: \"69ab991e-8fc7-4320-8627-0a1020527696\") " pod="openstack/keystone-94cf-account-create-update-d82t2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.771379 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vlfk2"]
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.772593 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.778223 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d4wk\" (UniqueName: \"kubernetes.io/projected/69ab991e-8fc7-4320-8627-0a1020527696-kube-api-access-6d4wk\") pod \"keystone-94cf-account-create-update-d82t2\" (UID: \"69ab991e-8fc7-4320-8627-0a1020527696\") " pod="openstack/keystone-94cf-account-create-update-d82t2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.794486 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vlfk2"]
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.825763 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vdkq7"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.859942 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxpbr\" (UniqueName: \"kubernetes.io/projected/0ff68719-5a69-4407-80c9-130f7a261c04-kube-api-access-hxpbr\") pod \"placement-db-create-vlfk2\" (UID: \"0ff68719-5a69-4407-80c9-130f7a261c04\") " pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.860011 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff68719-5a69-4407-80c9-130f7a261c04-operator-scripts\") pod \"placement-db-create-vlfk2\" (UID: \"0ff68719-5a69-4407-80c9-130f7a261c04\") " pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.871919 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9248-account-create-update-cpnmh"]
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.873245 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9248-account-create-update-cpnmh"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.875187 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.883439 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-94cf-account-create-update-d82t2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.896364 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9248-account-create-update-cpnmh"]
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.943961 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pm82f"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.961334 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff68719-5a69-4407-80c9-130f7a261c04-operator-scripts\") pod \"placement-db-create-vlfk2\" (UID: \"0ff68719-5a69-4407-80c9-130f7a261c04\") " pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.965039 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff68719-5a69-4407-80c9-130f7a261c04-operator-scripts\") pod \"placement-db-create-vlfk2\" (UID: \"0ff68719-5a69-4407-80c9-130f7a261c04\") " pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.965340 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4112ac2a-6403-43a2-81a9-089c9fce1e1c-operator-scripts\") pod \"placement-9248-account-create-update-cpnmh\" (UID: \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\") " pod="openstack/placement-9248-account-create-update-cpnmh"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.966011 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg2d\" (UniqueName: \"kubernetes.io/projected/4112ac2a-6403-43a2-81a9-089c9fce1e1c-kube-api-access-slg2d\") pod \"placement-9248-account-create-update-cpnmh\" (UID: \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\") " pod="openstack/placement-9248-account-create-update-cpnmh"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.966719 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxpbr\" (UniqueName: \"kubernetes.io/projected/0ff68719-5a69-4407-80c9-130f7a261c04-kube-api-access-hxpbr\") pod \"placement-db-create-vlfk2\" (UID: \"0ff68719-5a69-4407-80c9-130f7a261c04\") " pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:54 crc kubenswrapper[4813]: I0129 16:52:54.986878 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxpbr\" (UniqueName: \"kubernetes.io/projected/0ff68719-5a69-4407-80c9-130f7a261c04-kube-api-access-hxpbr\") pod \"placement-db-create-vlfk2\" (UID: \"0ff68719-5a69-4407-80c9-130f7a261c04\") " pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.068134 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmnhv\" (UniqueName: \"kubernetes.io/projected/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-kube-api-access-hmnhv\") pod \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\" (UID: \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\") "
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.068192 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-operator-scripts\") pod \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\" (UID: \"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4\") "
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.068524 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4112ac2a-6403-43a2-81a9-089c9fce1e1c-operator-scripts\") pod \"placement-9248-account-create-update-cpnmh\" (UID: \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\") " pod="openstack/placement-9248-account-create-update-cpnmh"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.068572 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg2d\" (UniqueName: \"kubernetes.io/projected/4112ac2a-6403-43a2-81a9-089c9fce1e1c-kube-api-access-slg2d\") pod \"placement-9248-account-create-update-cpnmh\" (UID: \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\") " pod="openstack/placement-9248-account-create-update-cpnmh"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.070419 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4112ac2a-6403-43a2-81a9-089c9fce1e1c-operator-scripts\") pod \"placement-9248-account-create-update-cpnmh\" (UID: \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\") " pod="openstack/placement-9248-account-create-update-cpnmh"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.070714 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec85c172-6e2f-48f5-939a-fcd4fd5c93b4" (UID: "ec85c172-6e2f-48f5-939a-fcd4fd5c93b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.072665 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-kube-api-access-hmnhv" (OuterVolumeSpecName: "kube-api-access-hmnhv") pod "ec85c172-6e2f-48f5-939a-fcd4fd5c93b4" (UID: "ec85c172-6e2f-48f5-939a-fcd4fd5c93b4"). InnerVolumeSpecName "kube-api-access-hmnhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.074073 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jxznp"]
Jan 29 16:52:55 crc kubenswrapper[4813]: E0129 16:52:55.074539 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec85c172-6e2f-48f5-939a-fcd4fd5c93b4" containerName="mariadb-account-create-update"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.074565 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec85c172-6e2f-48f5-939a-fcd4fd5c93b4" containerName="mariadb-account-create-update"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.074764 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec85c172-6e2f-48f5-939a-fcd4fd5c93b4" containerName="mariadb-account-create-update"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.075432 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.084864 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jxznp"]
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.085920 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg2d\" (UniqueName: \"kubernetes.io/projected/4112ac2a-6403-43a2-81a9-089c9fce1e1c-kube-api-access-slg2d\") pod \"placement-9248-account-create-update-cpnmh\" (UID: \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\") " pod="openstack/placement-9248-account-create-update-cpnmh"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.170147 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7br65\" (UniqueName: \"kubernetes.io/projected/2910884b-4b6f-4001-b7a6-cb47ad2b739b-kube-api-access-7br65\") pod \"glance-db-create-jxznp\" (UID: \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\") " pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.170214 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2910884b-4b6f-4001-b7a6-cb47ad2b739b-operator-scripts\") pod \"glance-db-create-jxznp\" (UID: \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\") " pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.170475 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmnhv\" (UniqueName: \"kubernetes.io/projected/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-kube-api-access-hmnhv\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.170508 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.170558 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-cf77-account-create-update-w2kw6"]
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.171759 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cf77-account-create-update-w2kw6"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.178740 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.182332 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cf77-account-create-update-w2kw6"]
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.239553 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.263415 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9248-account-create-update-cpnmh"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.271520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8shj\" (UniqueName: \"kubernetes.io/projected/45176ce9-7d7d-4342-b14f-b4dbf8628b37-kube-api-access-t8shj\") pod \"glance-cf77-account-create-update-w2kw6\" (UID: \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\") " pod="openstack/glance-cf77-account-create-update-w2kw6"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.271573 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7br65\" (UniqueName: \"kubernetes.io/projected/2910884b-4b6f-4001-b7a6-cb47ad2b739b-kube-api-access-7br65\") pod \"glance-db-create-jxznp\" (UID: \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\") " pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.271690 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45176ce9-7d7d-4342-b14f-b4dbf8628b37-operator-scripts\") pod \"glance-cf77-account-create-update-w2kw6\" (UID: \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\") " pod="openstack/glance-cf77-account-create-update-w2kw6"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.271805 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2910884b-4b6f-4001-b7a6-cb47ad2b739b-operator-scripts\") pod \"glance-db-create-jxznp\" (UID: \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\") " pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.272730 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2910884b-4b6f-4001-b7a6-cb47ad2b739b-operator-scripts\") pod \"glance-db-create-jxznp\" (UID: \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\") " pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.287505 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cmsdz" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" probeResult="failure" output=<
Jan 29 16:52:55 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 29 16:52:55 crc kubenswrapper[4813]: >
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.288857 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7br65\" (UniqueName: \"kubernetes.io/projected/2910884b-4b6f-4001-b7a6-cb47ad2b739b-kube-api-access-7br65\") pod \"glance-db-create-jxznp\" (UID: \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\") " pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.319968 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vdkq7"]
Jan 29 16:52:55 crc kubenswrapper[4813]: W0129 16:52:55.326096 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda667f87f_f7c9_4f33_8e88_db86259f3111.slice/crio-9403fff662dfda9ab7d0750b0872356bb8fbc588aed9a5111239a11a4c943666 WatchSource:0}: Error finding container 9403fff662dfda9ab7d0750b0872356bb8fbc588aed9a5111239a11a4c943666: Status 404 returned error can't find the container with id 9403fff662dfda9ab7d0750b0872356bb8fbc588aed9a5111239a11a4c943666
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.373563 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8shj\" (UniqueName: \"kubernetes.io/projected/45176ce9-7d7d-4342-b14f-b4dbf8628b37-kube-api-access-t8shj\") pod \"glance-cf77-account-create-update-w2kw6\" (UID: \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\") " pod="openstack/glance-cf77-account-create-update-w2kw6"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.373626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45176ce9-7d7d-4342-b14f-b4dbf8628b37-operator-scripts\") pod \"glance-cf77-account-create-update-w2kw6\" (UID: \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\") " pod="openstack/glance-cf77-account-create-update-w2kw6"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.374383 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45176ce9-7d7d-4342-b14f-b4dbf8628b37-operator-scripts\") pod \"glance-cf77-account-create-update-w2kw6\" (UID: \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\") " pod="openstack/glance-cf77-account-create-update-w2kw6"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.393435 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-94cf-account-create-update-d82t2"]
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.396400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8shj\" (UniqueName: \"kubernetes.io/projected/45176ce9-7d7d-4342-b14f-b4dbf8628b37-kube-api-access-t8shj\") pod \"glance-cf77-account-create-update-w2kw6\" (UID: \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\") " pod="openstack/glance-cf77-account-create-update-w2kw6"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.398149 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.486500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vdkq7" event={"ID":"a667f87f-f7c9-4f33-8e88-db86259f3111","Type":"ContainerStarted","Data":"9403fff662dfda9ab7d0750b0872356bb8fbc588aed9a5111239a11a4c943666"}
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.487478 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-94cf-account-create-update-d82t2" event={"ID":"69ab991e-8fc7-4320-8627-0a1020527696","Type":"ContainerStarted","Data":"558e443e5f182248608141ef91efb18c8c34cfa1e56f2ed9883dd7fc0d902d7b"}
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.488973 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pm82f" event={"ID":"ec85c172-6e2f-48f5-939a-fcd4fd5c93b4","Type":"ContainerDied","Data":"f4e117fef1bd7fb7c2277dd6ffef51b4cf7d99c6021bc5c00d0d31cd88545341"}
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.489027 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4e117fef1bd7fb7c2277dd6ffef51b4cf7d99c6021bc5c00d0d31cd88545341"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.489100 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pm82f"
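The recurring ovn-controller failure above is an exec readiness probe: a script inside the container checks the connection status, exits non-zero, and the kubelet echoes the captured output verbatim into the log. A hedged sketch of how such a probe is declared with the client-go API types follows; the command name and the timing values are assumptions for illustration, not the real ovn-controller pod spec.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Exec probes run a command inside the container; any non-zero exit is
    	// logged as "Probe failed" with probeResult="failure", as seen above.
    	probe := &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{
    				// Hypothetical script standing in for the real readiness check.
    				Command: []string{"/bin/sh", "-c", "ovn_controller_readiness.sh"},
    			},
    		},
    		InitialDelaySeconds: 5,
    		PeriodSeconds:       5, // roughly matches the ~5 s spacing of the failures in this log
    		FailureThreshold:    3,
    	}
    	fmt.Printf("%+v\n", probe)
    }

The cAdvisor warnings ("Failed to process watch event ... Status 404") in the same span are a benign race of the same kind: the cgroup watch fires before the just-created CRI-O container is registered, so the lookup by container ID transiently fails.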
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.499544 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cf77-account-create-update-w2kw6"
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.696654 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vlfk2"]
Jan 29 16:52:55 crc kubenswrapper[4813]: W0129 16:52:55.704503 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ff68719_5a69_4407_80c9_130f7a261c04.slice/crio-4c3ffbadaee78007838e04d0a2f200c0be2b00e600233e0d9df6ff2f810bb173 WatchSource:0}: Error finding container 4c3ffbadaee78007838e04d0a2f200c0be2b00e600233e0d9df6ff2f810bb173: Status 404 returned error can't find the container with id 4c3ffbadaee78007838e04d0a2f200c0be2b00e600233e0d9df6ff2f810bb173
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.813634 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9248-account-create-update-cpnmh"]
Jan 29 16:52:55 crc kubenswrapper[4813]: W0129 16:52:55.901085 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4112ac2a_6403_43a2_81a9_089c9fce1e1c.slice/crio-63ec1b0192dea54f98f3d856f607391d551366ef1cbc18656456ec8fdf2f0197 WatchSource:0}: Error finding container 63ec1b0192dea54f98f3d856f607391d551366ef1cbc18656456ec8fdf2f0197: Status 404 returned error can't find the container with id 63ec1b0192dea54f98f3d856f607391d551366ef1cbc18656456ec8fdf2f0197
Jan 29 16:52:55 crc kubenswrapper[4813]: I0129 16:52:55.916226 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jxznp"]
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.004371 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cf77-account-create-update-w2kw6"]
Jan 29 16:52:56 crc kubenswrapper[4813]: W0129 16:52:56.017356 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45176ce9_7d7d_4342_b14f_b4dbf8628b37.slice/crio-86dfd08e13cfde98289e30e344a8e8f054686ffe0da19c8e812a492beab8914e WatchSource:0}: Error finding container 86dfd08e13cfde98289e30e344a8e8f054686ffe0da19c8e812a492beab8914e: Status 404 returned error can't find the container with id 86dfd08e13cfde98289e30e344a8e8f054686ffe0da19c8e812a492beab8914e
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.497686 4813 generic.go:334] "Generic (PLEG): container finished" podID="2910884b-4b6f-4001-b7a6-cb47ad2b739b" containerID="be9b741f9e8bf46178022b96a89890a0e0e7c7be350924b1f642c3e821364e71" exitCode=0
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.497791 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jxznp" event={"ID":"2910884b-4b6f-4001-b7a6-cb47ad2b739b","Type":"ContainerDied","Data":"be9b741f9e8bf46178022b96a89890a0e0e7c7be350924b1f642c3e821364e71"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.497845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jxznp" event={"ID":"2910884b-4b6f-4001-b7a6-cb47ad2b739b","Type":"ContainerStarted","Data":"ab98f072c39a38cbebb7ca4e9ba8a3f74f27714d62cc53ac73b4296ab75f64d9"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.500306 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cf77-account-create-update-w2kw6" event={"ID":"45176ce9-7d7d-4342-b14f-b4dbf8628b37","Type":"ContainerStarted","Data":"3bf3df4b07dd7f86db3964c8dad4a78e0465b4368f0ee9fa8400066a2ebe7ae3"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.500341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cf77-account-create-update-w2kw6" event={"ID":"45176ce9-7d7d-4342-b14f-b4dbf8628b37","Type":"ContainerStarted","Data":"86dfd08e13cfde98289e30e344a8e8f054686ffe0da19c8e812a492beab8914e"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.502145 4813 generic.go:334] "Generic (PLEG): container finished" podID="69ab991e-8fc7-4320-8627-0a1020527696" containerID="2e961f3738f83dea177de8c765c5c2b24916e1ed20046897298f11c87664b7e8" exitCode=0
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.502201 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-94cf-account-create-update-d82t2" event={"ID":"69ab991e-8fc7-4320-8627-0a1020527696","Type":"ContainerDied","Data":"2e961f3738f83dea177de8c765c5c2b24916e1ed20046897298f11c87664b7e8"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.505821 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9248-account-create-update-cpnmh" event={"ID":"4112ac2a-6403-43a2-81a9-089c9fce1e1c","Type":"ContainerStarted","Data":"ce5c32a667986b42c5850eeeb60cd5f2b431a0fb460affe31e4bed46bb406b73"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.505847 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9248-account-create-update-cpnmh" event={"ID":"4112ac2a-6403-43a2-81a9-089c9fce1e1c","Type":"ContainerStarted","Data":"63ec1b0192dea54f98f3d856f607391d551366ef1cbc18656456ec8fdf2f0197"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.507844 4813 generic.go:334] "Generic (PLEG): container finished" podID="0ff68719-5a69-4407-80c9-130f7a261c04" containerID="c3d111a7d82308b87a5d3cd182c345565bf2ef14847685e0c1e62de5700f0e87" exitCode=0
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.507875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vlfk2" event={"ID":"0ff68719-5a69-4407-80c9-130f7a261c04","Type":"ContainerDied","Data":"c3d111a7d82308b87a5d3cd182c345565bf2ef14847685e0c1e62de5700f0e87"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.507894 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vlfk2" event={"ID":"0ff68719-5a69-4407-80c9-130f7a261c04","Type":"ContainerStarted","Data":"4c3ffbadaee78007838e04d0a2f200c0be2b00e600233e0d9df6ff2f810bb173"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.515151 4813 generic.go:334] "Generic (PLEG): container finished" podID="a667f87f-f7c9-4f33-8e88-db86259f3111" containerID="c5c687f8aab50f60e671310103be4e09a991eac4e72d7b46abbda47a928b52c0" exitCode=0
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.515194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vdkq7" event={"ID":"a667f87f-f7c9-4f33-8e88-db86259f3111","Type":"ContainerDied","Data":"c5c687f8aab50f60e671310103be4e09a991eac4e72d7b46abbda47a928b52c0"}
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.538013 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-cf77-account-create-update-w2kw6" podStartSLOduration=1.537992901 podStartE2EDuration="1.537992901s" podCreationTimestamp="2026-01-29 16:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:52:56.529414639 +0000 UTC m=+1429.016617855" watchObservedRunningTime="2026-01-29 16:52:56.537992901 +0000 UTC m=+1429.025196117"
Jan 29 16:52:56 crc kubenswrapper[4813]: I0129 16:52:56.575316 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9248-account-create-update-cpnmh" podStartSLOduration=2.5752992040000002 podStartE2EDuration="2.575299204s" podCreationTimestamp="2026-01-29 16:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:52:56.571074835 +0000 UTC m=+1429.058278071" watchObservedRunningTime="2026-01-29 16:52:56.575299204 +0000 UTC m=+1429.062502420"
Jan 29 16:52:57 crc kubenswrapper[4813]: I0129 16:52:57.524063 4813 generic.go:334] "Generic (PLEG): container finished" podID="3acd2fcf-e5d3-43bc-b216-225edbc7114a" containerID="5f48223a92a9444cb85942e403ab8901d1e45d0547889895defb0452c6c9b52a" exitCode=0
Jan 29 16:52:57 crc kubenswrapper[4813]: I0129 16:52:57.524164 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fpmz6" event={"ID":"3acd2fcf-e5d3-43bc-b216-225edbc7114a","Type":"ContainerDied","Data":"5f48223a92a9444cb85942e403ab8901d1e45d0547889895defb0452c6c9b52a"}
Jan 29 16:52:57 crc kubenswrapper[4813]: I0129 16:52:57.526792 4813 generic.go:334] "Generic (PLEG): container finished" podID="45176ce9-7d7d-4342-b14f-b4dbf8628b37" containerID="3bf3df4b07dd7f86db3964c8dad4a78e0465b4368f0ee9fa8400066a2ebe7ae3" exitCode=0
Jan 29 16:52:57 crc kubenswrapper[4813]: I0129 16:52:57.527007 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cf77-account-create-update-w2kw6" event={"ID":"45176ce9-7d7d-4342-b14f-b4dbf8628b37","Type":"ContainerDied","Data":"3bf3df4b07dd7f86db3964c8dad4a78e0465b4368f0ee9fa8400066a2ebe7ae3"}
Jan 29 16:52:57 crc kubenswrapper[4813]: I0129 16:52:57.529526 4813 generic.go:334] "Generic (PLEG): container finished" podID="4112ac2a-6403-43a2-81a9-089c9fce1e1c" containerID="ce5c32a667986b42c5850eeeb60cd5f2b431a0fb460affe31e4bed46bb406b73" exitCode=0
Jan 29 16:52:57 crc kubenswrapper[4813]: I0129 16:52:57.529812 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9248-account-create-update-cpnmh" event={"ID":"4112ac2a-6403-43a2-81a9-089c9fce1e1c","Type":"ContainerDied","Data":"ce5c32a667986b42c5850eeeb60cd5f2b431a0fb460affe31e4bed46bb406b73"}
Jan 29 16:52:57 crc kubenswrapper[4813]: I0129 16:52:57.948100 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vdkq7"
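The pod_startup_latency_tracker entries above carry a small piece of arithmetic worth making explicit: with both pull timestamps at their zero value (no image pull needed), podStartSLOduration is just watchObservedRunningTime minus podCreationTimestamp, e.g. 16:52:56.575299204 − 16:52:54 = 2.575299204 s for placement-9248-account-create-update-cpnmh. The same computation in Go, using the timestamps printed in the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2026-01-29 16:52:54 +0000 UTC")
    	observed, _ := time.Parse(layout, "2026-01-29 16:52:56.575299204 +0000 UTC")
    	// Prints 2.575299204s, matching podStartSLOduration for
    	// placement-9248-account-create-update-cpnmh above.
    	fmt.Println("podStartSLOduration =", observed.Sub(created))
    }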
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.021559 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smxmj\" (UniqueName: \"kubernetes.io/projected/a667f87f-f7c9-4f33-8e88-db86259f3111-kube-api-access-smxmj\") pod \"a667f87f-f7c9-4f33-8e88-db86259f3111\" (UID: \"a667f87f-f7c9-4f33-8e88-db86259f3111\") "
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.021941 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a667f87f-f7c9-4f33-8e88-db86259f3111-operator-scripts\") pod \"a667f87f-f7c9-4f33-8e88-db86259f3111\" (UID: \"a667f87f-f7c9-4f33-8e88-db86259f3111\") "
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.023151 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a667f87f-f7c9-4f33-8e88-db86259f3111-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a667f87f-f7c9-4f33-8e88-db86259f3111" (UID: "a667f87f-f7c9-4f33-8e88-db86259f3111"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.027399 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a667f87f-f7c9-4f33-8e88-db86259f3111-kube-api-access-smxmj" (OuterVolumeSpecName: "kube-api-access-smxmj") pod "a667f87f-f7c9-4f33-8e88-db86259f3111" (UID: "a667f87f-f7c9-4f33-8e88-db86259f3111"). InnerVolumeSpecName "kube-api-access-smxmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.071877 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vlfk2"
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.078185 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-94cf-account-create-update-d82t2"
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.082558 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jxznp"
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.126533 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff68719-5a69-4407-80c9-130f7a261c04-operator-scripts\") pod \"0ff68719-5a69-4407-80c9-130f7a261c04\" (UID: \"0ff68719-5a69-4407-80c9-130f7a261c04\") "
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.126954 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d4wk\" (UniqueName: \"kubernetes.io/projected/69ab991e-8fc7-4320-8627-0a1020527696-kube-api-access-6d4wk\") pod \"69ab991e-8fc7-4320-8627-0a1020527696\" (UID: \"69ab991e-8fc7-4320-8627-0a1020527696\") "
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.127325 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxpbr\" (UniqueName: \"kubernetes.io/projected/0ff68719-5a69-4407-80c9-130f7a261c04-kube-api-access-hxpbr\") pod \"0ff68719-5a69-4407-80c9-130f7a261c04\" (UID: \"0ff68719-5a69-4407-80c9-130f7a261c04\") "
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.127484 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ab991e-8fc7-4320-8627-0a1020527696-operator-scripts\") pod \"69ab991e-8fc7-4320-8627-0a1020527696\" (UID: \"69ab991e-8fc7-4320-8627-0a1020527696\") "
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.130612 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smxmj\" (UniqueName: \"kubernetes.io/projected/a667f87f-f7c9-4f33-8e88-db86259f3111-kube-api-access-smxmj\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.130777 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a667f87f-f7c9-4f33-8e88-db86259f3111-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.130925 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69ab991e-8fc7-4320-8627-0a1020527696-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69ab991e-8fc7-4320-8627-0a1020527696" (UID: "69ab991e-8fc7-4320-8627-0a1020527696"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.132647 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ff68719-5a69-4407-80c9-130f7a261c04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ff68719-5a69-4407-80c9-130f7a261c04" (UID: "0ff68719-5a69-4407-80c9-130f7a261c04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.134450 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ab991e-8fc7-4320-8627-0a1020527696-kube-api-access-6d4wk" (OuterVolumeSpecName: "kube-api-access-6d4wk") pod "69ab991e-8fc7-4320-8627-0a1020527696" (UID: "69ab991e-8fc7-4320-8627-0a1020527696"). InnerVolumeSpecName "kube-api-access-6d4wk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.135183 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ff68719-5a69-4407-80c9-130f7a261c04-kube-api-access-hxpbr" (OuterVolumeSpecName: "kube-api-access-hxpbr") pod "0ff68719-5a69-4407-80c9-130f7a261c04" (UID: "0ff68719-5a69-4407-80c9-130f7a261c04"). InnerVolumeSpecName "kube-api-access-hxpbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.231610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7br65\" (UniqueName: \"kubernetes.io/projected/2910884b-4b6f-4001-b7a6-cb47ad2b739b-kube-api-access-7br65\") pod \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\" (UID: \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\") "
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.231733 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2910884b-4b6f-4001-b7a6-cb47ad2b739b-operator-scripts\") pod \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\" (UID: \"2910884b-4b6f-4001-b7a6-cb47ad2b739b\") "
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.232207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2910884b-4b6f-4001-b7a6-cb47ad2b739b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2910884b-4b6f-4001-b7a6-cb47ad2b739b" (UID: "2910884b-4b6f-4001-b7a6-cb47ad2b739b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.232236 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxpbr\" (UniqueName: \"kubernetes.io/projected/0ff68719-5a69-4407-80c9-130f7a261c04-kube-api-access-hxpbr\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.232255 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ab991e-8fc7-4320-8627-0a1020527696-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.232267 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ff68719-5a69-4407-80c9-130f7a261c04-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.232277 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d4wk\" (UniqueName: \"kubernetes.io/projected/69ab991e-8fc7-4320-8627-0a1020527696-kube-api-access-6d4wk\") on node \"crc\" DevicePath \"\""
Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.234401 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2910884b-4b6f-4001-b7a6-cb47ad2b739b-kube-api-access-7br65" (OuterVolumeSpecName: "kube-api-access-7br65") pod "2910884b-4b6f-4001-b7a6-cb47ad2b739b" (UID: "2910884b-4b6f-4001-b7a6-cb47ad2b739b"). InnerVolumeSpecName "kube-api-access-7br65". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.333589 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7br65\" (UniqueName: \"kubernetes.io/projected/2910884b-4b6f-4001-b7a6-cb47ad2b739b-kube-api-access-7br65\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.333632 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2910884b-4b6f-4001-b7a6-cb47ad2b739b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.422314 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pm82f"] Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.428157 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pm82f"] Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.539415 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vdkq7" event={"ID":"a667f87f-f7c9-4f33-8e88-db86259f3111","Type":"ContainerDied","Data":"9403fff662dfda9ab7d0750b0872356bb8fbc588aed9a5111239a11a4c943666"} Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.539549 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9403fff662dfda9ab7d0750b0872356bb8fbc588aed9a5111239a11a4c943666" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.539443 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vdkq7" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.540879 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jxznp" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.540853 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jxznp" event={"ID":"2910884b-4b6f-4001-b7a6-cb47ad2b739b","Type":"ContainerDied","Data":"ab98f072c39a38cbebb7ca4e9ba8a3f74f27714d62cc53ac73b4296ab75f64d9"} Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.540998 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab98f072c39a38cbebb7ca4e9ba8a3f74f27714d62cc53ac73b4296ab75f64d9" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.542534 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-94cf-account-create-update-d82t2" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.542517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-94cf-account-create-update-d82t2" event={"ID":"69ab991e-8fc7-4320-8627-0a1020527696","Type":"ContainerDied","Data":"558e443e5f182248608141ef91efb18c8c34cfa1e56f2ed9883dd7fc0d902d7b"} Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.542632 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="558e443e5f182248608141ef91efb18c8c34cfa1e56f2ed9883dd7fc0d902d7b" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.544361 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vlfk2" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.544357 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vlfk2" event={"ID":"0ff68719-5a69-4407-80c9-130f7a261c04","Type":"ContainerDied","Data":"4c3ffbadaee78007838e04d0a2f200c0be2b00e600233e0d9df6ff2f810bb173"} Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.544408 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c3ffbadaee78007838e04d0a2f200c0be2b00e600233e0d9df6ff2f810bb173" Jan 29 16:52:58 crc kubenswrapper[4813]: I0129 16:52:58.960379 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-fpmz6" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.044929 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-ring-data-devices\") pod \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.045047 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tssdv\" (UniqueName: \"kubernetes.io/projected/3acd2fcf-e5d3-43bc-b216-225edbc7114a-kube-api-access-tssdv\") pod \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.045079 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-swiftconf\") pod \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.045143 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-combined-ca-bundle\") pod \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.045166 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-dispersionconf\") pod \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.045220 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-scripts\") pod \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.045250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3acd2fcf-e5d3-43bc-b216-225edbc7114a-etc-swift\") pod \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\" (UID: \"3acd2fcf-e5d3-43bc-b216-225edbc7114a\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.046153 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3acd2fcf-e5d3-43bc-b216-225edbc7114a" 
(UID: "3acd2fcf-e5d3-43bc-b216-225edbc7114a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.046543 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3acd2fcf-e5d3-43bc-b216-225edbc7114a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3acd2fcf-e5d3-43bc-b216-225edbc7114a" (UID: "3acd2fcf-e5d3-43bc-b216-225edbc7114a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.050148 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3acd2fcf-e5d3-43bc-b216-225edbc7114a-kube-api-access-tssdv" (OuterVolumeSpecName: "kube-api-access-tssdv") pod "3acd2fcf-e5d3-43bc-b216-225edbc7114a" (UID: "3acd2fcf-e5d3-43bc-b216-225edbc7114a"). InnerVolumeSpecName "kube-api-access-tssdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.051594 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3acd2fcf-e5d3-43bc-b216-225edbc7114a" (UID: "3acd2fcf-e5d3-43bc-b216-225edbc7114a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.066170 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3acd2fcf-e5d3-43bc-b216-225edbc7114a" (UID: "3acd2fcf-e5d3-43bc-b216-225edbc7114a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.071426 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-scripts" (OuterVolumeSpecName: "scripts") pod "3acd2fcf-e5d3-43bc-b216-225edbc7114a" (UID: "3acd2fcf-e5d3-43bc-b216-225edbc7114a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.071918 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3acd2fcf-e5d3-43bc-b216-225edbc7114a" (UID: "3acd2fcf-e5d3-43bc-b216-225edbc7114a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.089072 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cf77-account-create-update-w2kw6" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.096438 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9248-account-create-update-cpnmh" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.148927 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4112ac2a-6403-43a2-81a9-089c9fce1e1c-operator-scripts\") pod \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\" (UID: \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149121 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg2d\" (UniqueName: \"kubernetes.io/projected/4112ac2a-6403-43a2-81a9-089c9fce1e1c-kube-api-access-slg2d\") pod \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\" (UID: \"4112ac2a-6403-43a2-81a9-089c9fce1e1c\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149176 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8shj\" (UniqueName: \"kubernetes.io/projected/45176ce9-7d7d-4342-b14f-b4dbf8628b37-kube-api-access-t8shj\") pod \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\" (UID: \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149323 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45176ce9-7d7d-4342-b14f-b4dbf8628b37-operator-scripts\") pod \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\" (UID: \"45176ce9-7d7d-4342-b14f-b4dbf8628b37\") " Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149433 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4112ac2a-6403-43a2-81a9-089c9fce1e1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4112ac2a-6403-43a2-81a9-089c9fce1e1c" (UID: "4112ac2a-6403-43a2-81a9-089c9fce1e1c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149732 4813 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149751 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4112ac2a-6403-43a2-81a9-089c9fce1e1c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149764 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tssdv\" (UniqueName: \"kubernetes.io/projected/3acd2fcf-e5d3-43bc-b216-225edbc7114a-kube-api-access-tssdv\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149793 4813 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149803 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149813 4813 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3acd2fcf-e5d3-43bc-b216-225edbc7114a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149824 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3acd2fcf-e5d3-43bc-b216-225edbc7114a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.149834 4813 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3acd2fcf-e5d3-43bc-b216-225edbc7114a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.150221 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45176ce9-7d7d-4342-b14f-b4dbf8628b37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45176ce9-7d7d-4342-b14f-b4dbf8628b37" (UID: "45176ce9-7d7d-4342-b14f-b4dbf8628b37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.154138 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45176ce9-7d7d-4342-b14f-b4dbf8628b37-kube-api-access-t8shj" (OuterVolumeSpecName: "kube-api-access-t8shj") pod "45176ce9-7d7d-4342-b14f-b4dbf8628b37" (UID: "45176ce9-7d7d-4342-b14f-b4dbf8628b37"). InnerVolumeSpecName "kube-api-access-t8shj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.154333 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4112ac2a-6403-43a2-81a9-089c9fce1e1c-kube-api-access-slg2d" (OuterVolumeSpecName: "kube-api-access-slg2d") pod "4112ac2a-6403-43a2-81a9-089c9fce1e1c" (UID: "4112ac2a-6403-43a2-81a9-089c9fce1e1c"). InnerVolumeSpecName "kube-api-access-slg2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.251526 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8shj\" (UniqueName: \"kubernetes.io/projected/45176ce9-7d7d-4342-b14f-b4dbf8628b37-kube-api-access-t8shj\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.251900 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45176ce9-7d7d-4342-b14f-b4dbf8628b37-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.251911 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg2d\" (UniqueName: \"kubernetes.io/projected/4112ac2a-6403-43a2-81a9-089c9fce1e1c-kube-api-access-slg2d\") on node \"crc\" DevicePath \"\"" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.556680 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cf77-account-create-update-w2kw6" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.556682 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cf77-account-create-update-w2kw6" event={"ID":"45176ce9-7d7d-4342-b14f-b4dbf8628b37","Type":"ContainerDied","Data":"86dfd08e13cfde98289e30e344a8e8f054686ffe0da19c8e812a492beab8914e"} Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.556845 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86dfd08e13cfde98289e30e344a8e8f054686ffe0da19c8e812a492beab8914e" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.559068 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9248-account-create-update-cpnmh" event={"ID":"4112ac2a-6403-43a2-81a9-089c9fce1e1c","Type":"ContainerDied","Data":"63ec1b0192dea54f98f3d856f607391d551366ef1cbc18656456ec8fdf2f0197"} Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.559089 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ec1b0192dea54f98f3d856f607391d551366ef1cbc18656456ec8fdf2f0197" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.559176 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9248-account-create-update-cpnmh" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.563444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-fpmz6" event={"ID":"3acd2fcf-e5d3-43bc-b216-225edbc7114a","Type":"ContainerDied","Data":"a234f58d409d4245dd88a807a23703b4333611542c6c38f766eea6a20caff659"} Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.563474 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a234f58d409d4245dd88a807a23703b4333611542c6c38f766eea6a20caff659" Jan 29 16:52:59 crc kubenswrapper[4813]: I0129 16:52:59.563524 4813 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.253049 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec85c172-6e2f-48f5-939a-fcd4fd5c93b4" path="/var/lib/kubelet/pods/ec85c172-6e2f-48f5-939a-fcd4fd5c93b4/volumes"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.326006 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cmsdz" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" probeResult="failure" output=<
Jan 29 16:53:00 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 29 16:53:00 crc kubenswrapper[4813]: >
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.333945 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6jmvf"]
Jan 29 16:53:00 crc kubenswrapper[4813]: E0129 16:53:00.334467 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ab991e-8fc7-4320-8627-0a1020527696" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334491 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ab991e-8fc7-4320-8627-0a1020527696" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: E0129 16:53:00.334514 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ff68719-5a69-4407-80c9-130f7a261c04" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334524 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ff68719-5a69-4407-80c9-130f7a261c04" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: E0129 16:53:00.334535 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a667f87f-f7c9-4f33-8e88-db86259f3111" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334543 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a667f87f-f7c9-4f33-8e88-db86259f3111" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: E0129 16:53:00.334562 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4112ac2a-6403-43a2-81a9-089c9fce1e1c" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334569 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4112ac2a-6403-43a2-81a9-089c9fce1e1c" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: E0129 16:53:00.334583 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2910884b-4b6f-4001-b7a6-cb47ad2b739b" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334590 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2910884b-4b6f-4001-b7a6-cb47ad2b739b" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: E0129 16:53:00.334600 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45176ce9-7d7d-4342-b14f-b4dbf8628b37" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334607 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="45176ce9-7d7d-4342-b14f-b4dbf8628b37" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: E0129 16:53:00.334619 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3acd2fcf-e5d3-43bc-b216-225edbc7114a" containerName="swift-ring-rebalance"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334626 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3acd2fcf-e5d3-43bc-b216-225edbc7114a" containerName="swift-ring-rebalance"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334838 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="45176ce9-7d7d-4342-b14f-b4dbf8628b37" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334858 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ab991e-8fc7-4320-8627-0a1020527696" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334871 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ff68719-5a69-4407-80c9-130f7a261c04" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334881 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a667f87f-f7c9-4f33-8e88-db86259f3111" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334888 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2910884b-4b6f-4001-b7a6-cb47ad2b739b" containerName="mariadb-database-create"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334897 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4112ac2a-6403-43a2-81a9-089c9fce1e1c" containerName="mariadb-account-create-update"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.334906 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3acd2fcf-e5d3-43bc-b216-225edbc7114a" containerName="swift-ring-rebalance"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.335554 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6jmvf"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.337953 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.339180 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ktqxj"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.340073 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6jmvf"]
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.344788 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xqdpz"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.477215 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-db-sync-config-data\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.477275 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-config-data\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.477394 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-combined-ca-bundle\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.477461 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgzds\" (UniqueName: \"kubernetes.io/projected/3a9d9351-4988-4044-b47f-de154e889b47-kube-api-access-mgzds\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.579335 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-combined-ca-bundle\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.579406 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgzds\" (UniqueName: \"kubernetes.io/projected/3a9d9351-4988-4044-b47f-de154e889b47-kube-api-access-mgzds\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.579456 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-db-sync-config-data\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf"
Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.579476 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-config-data\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf" Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.584400 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-config-data\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf" Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.584522 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-combined-ca-bundle\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf" Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.594485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-db-sync-config-data\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf" Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.594711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgzds\" (UniqueName: \"kubernetes.io/projected/3a9d9351-4988-4044-b47f-de154e889b47-kube-api-access-mgzds\") pod \"glance-db-sync-6jmvf\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " pod="openstack/glance-db-sync-6jmvf" Jan 29 16:53:00 crc kubenswrapper[4813]: I0129 16:53:00.655615 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6jmvf" Jan 29 16:53:01 crc kubenswrapper[4813]: I0129 16:53:01.146549 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6jmvf"] Jan 29 16:53:01 crc kubenswrapper[4813]: I0129 16:53:01.418337 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 16:53:01 crc kubenswrapper[4813]: I0129 16:53:01.582613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6jmvf" event={"ID":"3a9d9351-4988-4044-b47f-de154e889b47","Type":"ContainerStarted","Data":"fb0f2d032d3ad80e794506787a7f702684b197c265a118e92db8fa1b5c973491"} Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.446268 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-z6vxg"] Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.447985 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.450057 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.465628 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z6vxg"] Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.535979 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9f8e8a-1914-4ff8-a02e-044117f535b2-operator-scripts\") pod \"root-account-create-update-z6vxg\" (UID: \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\") " pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.536059 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldnbk\" (UniqueName: \"kubernetes.io/projected/ec9f8e8a-1914-4ff8-a02e-044117f535b2-kube-api-access-ldnbk\") pod \"root-account-create-update-z6vxg\" (UID: \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\") " pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.600412 4813 generic.go:334] "Generic (PLEG): container finished" podID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerID="53ac6aa3a537c0b6fd153a13fe50bef12959cc535873b4b51c03e2ead8c60e6c" exitCode=0 Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.600460 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6463fe6f-cd6d-4078-8fa2-0d167de480df","Type":"ContainerDied","Data":"53ac6aa3a537c0b6fd153a13fe50bef12959cc535873b4b51c03e2ead8c60e6c"} Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.637678 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldnbk\" (UniqueName: \"kubernetes.io/projected/ec9f8e8a-1914-4ff8-a02e-044117f535b2-kube-api-access-ldnbk\") pod \"root-account-create-update-z6vxg\" (UID: \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\") " pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.637854 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9f8e8a-1914-4ff8-a02e-044117f535b2-operator-scripts\") pod \"root-account-create-update-z6vxg\" (UID: \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\") " pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.639055 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9f8e8a-1914-4ff8-a02e-044117f535b2-operator-scripts\") pod \"root-account-create-update-z6vxg\" (UID: \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\") " pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.660172 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldnbk\" (UniqueName: \"kubernetes.io/projected/ec9f8e8a-1914-4ff8-a02e-044117f535b2-kube-api-access-ldnbk\") pod \"root-account-create-update-z6vxg\" (UID: \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\") " pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:03 crc kubenswrapper[4813]: I0129 16:53:03.794052 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:04 crc kubenswrapper[4813]: I0129 16:53:04.221362 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-z6vxg"] Jan 29 16:53:04 crc kubenswrapper[4813]: W0129 16:53:04.225976 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec9f8e8a_1914_4ff8_a02e_044117f535b2.slice/crio-341e1488c3ce5d0ef797ecee934fbfa433f810620ee42bdf67b7afb74dd4ab9f WatchSource:0}: Error finding container 341e1488c3ce5d0ef797ecee934fbfa433f810620ee42bdf67b7afb74dd4ab9f: Status 404 returned error can't find the container with id 341e1488c3ce5d0ef797ecee934fbfa433f810620ee42bdf67b7afb74dd4ab9f Jan 29 16:53:04 crc kubenswrapper[4813]: I0129 16:53:04.608891 4813 generic.go:334] "Generic (PLEG): container finished" podID="ec9f8e8a-1914-4ff8-a02e-044117f535b2" containerID="a050c06e3172b7342e0c18cd9195f3454e570a16b8cefff22309e38893720db0" exitCode=0 Jan 29 16:53:04 crc kubenswrapper[4813]: I0129 16:53:04.608959 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z6vxg" event={"ID":"ec9f8e8a-1914-4ff8-a02e-044117f535b2","Type":"ContainerDied","Data":"a050c06e3172b7342e0c18cd9195f3454e570a16b8cefff22309e38893720db0"} Jan 29 16:53:04 crc kubenswrapper[4813]: I0129 16:53:04.608986 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z6vxg" event={"ID":"ec9f8e8a-1914-4ff8-a02e-044117f535b2","Type":"ContainerStarted","Data":"341e1488c3ce5d0ef797ecee934fbfa433f810620ee42bdf67b7afb74dd4ab9f"} Jan 29 16:53:04 crc kubenswrapper[4813]: I0129 16:53:04.612071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6463fe6f-cd6d-4078-8fa2-0d167de480df","Type":"ContainerStarted","Data":"a6111d6afee3055cda0c53ca25e0f552bbaa1c12f35b905627d640bdc35dfedb"} Jan 29 16:53:04 crc kubenswrapper[4813]: I0129 16:53:04.612271 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 16:53:04 crc kubenswrapper[4813]: I0129 16:53:04.663812 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.997880527 podStartE2EDuration="1m24.663794266s" podCreationTimestamp="2026-01-29 16:51:40 +0000 UTC" firstStartedPulling="2026-01-29 16:51:42.450817759 +0000 UTC m=+1354.938020975" lastFinishedPulling="2026-01-29 16:52:29.116731488 +0000 UTC m=+1401.603934714" observedRunningTime="2026-01-29 16:53:04.65154115 +0000 UTC m=+1437.138744376" watchObservedRunningTime="2026-01-29 16:53:04.663794266 +0000 UTC m=+1437.150997482" Jan 29 16:53:05 crc kubenswrapper[4813]: I0129 16:53:05.294717 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cmsdz" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" probeResult="failure" output=< Jan 29 16:53:05 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 16:53:05 crc kubenswrapper[4813]: > Jan 29 16:53:09 crc kubenswrapper[4813]: I0129 16:53:09.655377 4813 generic.go:334] "Generic (PLEG): container finished" podID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerID="7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4" exitCode=0 Jan 29 16:53:09 crc kubenswrapper[4813]: I0129 16:53:09.655567 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bda951f8-8354-4ca3-be9e-f92f6fea40cc","Type":"ContainerDied","Data":"7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4"} Jan 29 16:53:09 crc kubenswrapper[4813]: I0129 16:53:09.740205 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:53:09 crc kubenswrapper[4813]: I0129 16:53:09.754567 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"swift-storage-0\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " pod="openstack/swift-storage-0" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.001686 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.297981 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cmsdz" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" probeResult="failure" output=< Jan 29 16:53:10 crc kubenswrapper[4813]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 16:53:10 crc kubenswrapper[4813]: > Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.319731 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.525782 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cmsdz-config-psd7x"] Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.526872 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.532637 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.536893 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cmsdz-config-psd7x"] Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.662981 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-scripts\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.663085 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-additional-scripts\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.663143 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.664431 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run-ovn\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.664512 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp5n\" (UniqueName: \"kubernetes.io/projected/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-kube-api-access-fwp5n\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.664544 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-log-ovn\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766209 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp5n\" (UniqueName: \"kubernetes.io/projected/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-kube-api-access-fwp5n\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-log-ovn\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766327 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-scripts\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766367 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-additional-scripts\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766388 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766443 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run-ovn\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766712 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run-ovn\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766762 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.766795 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-log-ovn\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.767243 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-additional-scripts\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.768742 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-scripts\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.792488 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp5n\" (UniqueName: \"kubernetes.io/projected/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-kube-api-access-fwp5n\") pod \"ovn-controller-cmsdz-config-psd7x\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:10 crc kubenswrapper[4813]: I0129 16:53:10.868743 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.515588 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.568152 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sfxl8"] Jan 29 16:53:12 crc kubenswrapper[4813]: E0129 16:53:12.568612 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9f8e8a-1914-4ff8-a02e-044117f535b2" containerName="mariadb-account-create-update" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.568628 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9f8e8a-1914-4ff8-a02e-044117f535b2" containerName="mariadb-account-create-update" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.568856 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9f8e8a-1914-4ff8-a02e-044117f535b2" containerName="mariadb-account-create-update" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.570711 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.596280 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9f8e8a-1914-4ff8-a02e-044117f535b2-operator-scripts\") pod \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\" (UID: \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\") " Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.596722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldnbk\" (UniqueName: \"kubernetes.io/projected/ec9f8e8a-1914-4ff8-a02e-044117f535b2-kube-api-access-ldnbk\") pod \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\" (UID: \"ec9f8e8a-1914-4ff8-a02e-044117f535b2\") " Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.600585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec9f8e8a-1914-4ff8-a02e-044117f535b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec9f8e8a-1914-4ff8-a02e-044117f535b2" (UID: "ec9f8e8a-1914-4ff8-a02e-044117f535b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.601153 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9f8e8a-1914-4ff8-a02e-044117f535b2-kube-api-access-ldnbk" (OuterVolumeSpecName: "kube-api-access-ldnbk") pod "ec9f8e8a-1914-4ff8-a02e-044117f535b2" (UID: "ec9f8e8a-1914-4ff8-a02e-044117f535b2"). 
InnerVolumeSpecName "kube-api-access-ldnbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.610407 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfxl8"] Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.698227 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-catalog-content\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.698584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfb7r\" (UniqueName: \"kubernetes.io/projected/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-kube-api-access-lfb7r\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.698649 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-utilities\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.698795 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec9f8e8a-1914-4ff8-a02e-044117f535b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.698816 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldnbk\" (UniqueName: \"kubernetes.io/projected/ec9f8e8a-1914-4ff8-a02e-044117f535b2-kube-api-access-ldnbk\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.733654 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-z6vxg" event={"ID":"ec9f8e8a-1914-4ff8-a02e-044117f535b2","Type":"ContainerDied","Data":"341e1488c3ce5d0ef797ecee934fbfa433f810620ee42bdf67b7afb74dd4ab9f"} Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.733690 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="341e1488c3ce5d0ef797ecee934fbfa433f810620ee42bdf67b7afb74dd4ab9f" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.733745 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-z6vxg" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.801586 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-catalog-content\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.802246 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-catalog-content\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.802373 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfb7r\" (UniqueName: \"kubernetes.io/projected/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-kube-api-access-lfb7r\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.802493 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-utilities\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.803345 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-utilities\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.831302 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfb7r\" (UniqueName: \"kubernetes.io/projected/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-kube-api-access-lfb7r\") pod \"community-operators-sfxl8\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:12 crc kubenswrapper[4813]: I0129 16:53:12.955878 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.282217 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cmsdz-config-psd7x"] Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.319048 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:53:14 crc kubenswrapper[4813]: W0129 16:53:13.359689 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2aa8580_8c90_4607_a906_c039e1e4c111.slice/crio-b692aee5352815bccee628d6bc7b15a8275f3ba4405ca90ce7e6202a6c7691b6 WatchSource:0}: Error finding container b692aee5352815bccee628d6bc7b15a8275f3ba4405ca90ce7e6202a6c7691b6: Status 404 returned error can't find the container with id b692aee5352815bccee628d6bc7b15a8275f3ba4405ca90ce7e6202a6c7691b6 Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.372800 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.517431 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfxl8"] Jan 29 16:53:14 crc kubenswrapper[4813]: W0129 16:53:13.531382 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfecbbbe0_d104_48a7_a58a_7e9dd83f0b37.slice/crio-c45b9321e7c3af7e34d6b4fe31566de3c613024d5b67ee3e6f365ad9241a9473 WatchSource:0}: Error finding container c45b9321e7c3af7e34d6b4fe31566de3c613024d5b67ee3e6f365ad9241a9473: Status 404 returned error can't find the container with id c45b9321e7c3af7e34d6b4fe31566de3c613024d5b67ee3e6f365ad9241a9473 Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.746618 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz-config-psd7x" event={"ID":"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215","Type":"ContainerStarted","Data":"24bea98423653dbe14915ccf830b210d9c2e824eccc8ccf86943d9691cbecf47"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.746818 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz-config-psd7x" event={"ID":"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215","Type":"ContainerStarted","Data":"26639759c643b13c4d21489f7821827b84bc8540c3d5e2351ae16a305f743f65"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.748863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bda951f8-8354-4ca3-be9e-f92f6fea40cc","Type":"ContainerStarted","Data":"4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.749087 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.757023 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6jmvf" event={"ID":"3a9d9351-4988-4044-b47f-de154e889b47","Type":"ContainerStarted","Data":"1566217be5fc6671168e1436a1ffe45d1c7fe94af1c506065d72f0cfdaa35a2f"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.761897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfxl8" 
event={"ID":"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37","Type":"ContainerStarted","Data":"ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.761930 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfxl8" event={"ID":"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37","Type":"ContainerStarted","Data":"c45b9321e7c3af7e34d6b4fe31566de3c613024d5b67ee3e6f365ad9241a9473"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.768686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"b692aee5352815bccee628d6bc7b15a8275f3ba4405ca90ce7e6202a6c7691b6"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.773810 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cmsdz-config-psd7x" podStartSLOduration=3.773787355 podStartE2EDuration="3.773787355s" podCreationTimestamp="2026-01-29 16:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:13.768878217 +0000 UTC m=+1446.256081433" watchObservedRunningTime="2026-01-29 16:53:13.773787355 +0000 UTC m=+1446.260990571" Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.821751 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.336720589 podStartE2EDuration="1m33.821732738s" podCreationTimestamp="2026-01-29 16:51:40 +0000 UTC" firstStartedPulling="2026-01-29 16:51:42.700127922 +0000 UTC m=+1355.187331138" lastFinishedPulling="2026-01-29 16:52:36.185140071 +0000 UTC m=+1408.672343287" observedRunningTime="2026-01-29 16:53:13.814050382 +0000 UTC m=+1446.301253598" watchObservedRunningTime="2026-01-29 16:53:13.821732738 +0000 UTC m=+1446.308935954" Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:13.838818 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6jmvf" podStartSLOduration=2.320601629 podStartE2EDuration="13.8387954s" podCreationTimestamp="2026-01-29 16:53:00 +0000 UTC" firstStartedPulling="2026-01-29 16:53:01.15163501 +0000 UTC m=+1433.638838226" lastFinishedPulling="2026-01-29 16:53:12.669828781 +0000 UTC m=+1445.157031997" observedRunningTime="2026-01-29 16:53:13.832216434 +0000 UTC m=+1446.319419670" watchObservedRunningTime="2026-01-29 16:53:13.8387954 +0000 UTC m=+1446.325998616" Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:14.778501 4813 generic.go:334] "Generic (PLEG): container finished" podID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerID="ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5" exitCode=0 Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:14.778592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfxl8" event={"ID":"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37","Type":"ContainerDied","Data":"ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:14.780949 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"357fa21995192338eea4cf618d4dd037f9d9c53370af078711f773adf08dcd27"} Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:14.782812 4813 generic.go:334] 
"Generic (PLEG): container finished" podID="abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" containerID="24bea98423653dbe14915ccf830b210d9c2e824eccc8ccf86943d9691cbecf47" exitCode=0 Jan 29 16:53:14 crc kubenswrapper[4813]: I0129 16:53:14.782856 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz-config-psd7x" event={"ID":"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215","Type":"ContainerDied","Data":"24bea98423653dbe14915ccf830b210d9c2e824eccc8ccf86943d9691cbecf47"} Jan 29 16:53:15 crc kubenswrapper[4813]: I0129 16:53:15.309493 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cmsdz" Jan 29 16:53:15 crc kubenswrapper[4813]: I0129 16:53:15.797357 4813 generic.go:334] "Generic (PLEG): container finished" podID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerID="a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee" exitCode=0 Jan 29 16:53:15 crc kubenswrapper[4813]: I0129 16:53:15.798660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfxl8" event={"ID":"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37","Type":"ContainerDied","Data":"a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee"} Jan 29 16:53:15 crc kubenswrapper[4813]: I0129 16:53:15.802652 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"34a647053be6be5646ecc99401b474db714d929ce717508cb1f765ce77b6559c"} Jan 29 16:53:15 crc kubenswrapper[4813]: I0129 16:53:15.802697 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"632060180af065e41041ad9ffff2395b8df04b6c43e12683432988737705e770"} Jan 29 16:53:15 crc kubenswrapper[4813]: I0129 16:53:15.802711 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"c9a12f754af1b256a78355b7b3a3e4f909d6a20d352cb12126f67304e678fc2f"} Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.101225 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cmsdz-config-psd7x" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.200652 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-additional-scripts\") pod \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.200729 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run\") pod \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.200822 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-scripts\") pod \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.200884 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwp5n\" (UniqueName: \"kubernetes.io/projected/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-kube-api-access-fwp5n\") pod \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.200912 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run" (OuterVolumeSpecName: "var-run") pod "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" (UID: "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202006 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-scripts" (OuterVolumeSpecName: "scripts") pod "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" (UID: "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202136 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-log-ovn\") pod \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202153 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run-ovn\") pod \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\" (UID: \"abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215\") " Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202207 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" (UID: "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202295 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" (UID: "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202578 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202595 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202604 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.202612 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.203123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" (UID: "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.214913 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-kube-api-access-fwp5n" (OuterVolumeSpecName: "kube-api-access-fwp5n") pod "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" (UID: "abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215"). InnerVolumeSpecName "kube-api-access-fwp5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.303677 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwp5n\" (UniqueName: \"kubernetes.io/projected/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-kube-api-access-fwp5n\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.303712 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.372913 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cmsdz-config-psd7x"] Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.385825 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cmsdz-config-psd7x"] Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.473163 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cmsdz-config-qtdjt"] Jan 29 16:53:16 crc kubenswrapper[4813]: E0129 16:53:16.473582 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" containerName="ovn-config" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.473605 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" containerName="ovn-config" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.473758 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" containerName="ovn-config" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.474303 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.483588 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cmsdz-config-qtdjt"] Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.607726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdbc\" (UniqueName: \"kubernetes.io/projected/f25c3d1b-001b-4078-9798-c1f529d93cad-kube-api-access-gbdbc\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.607800 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run-ovn\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.607832 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-additional-scripts\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.607887 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-scripts\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.607927 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-log-ovn\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.607964 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.709078 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-scripts\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.709416 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-log-ovn\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:16 crc kubenswrapper[4813]: 
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.709530 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdbc\" (UniqueName: \"kubernetes.io/projected/f25c3d1b-001b-4078-9798-c1f529d93cad-kube-api-access-gbdbc\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.709569 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run-ovn\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.709592 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-additional-scripts\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.709754 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-log-ovn\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.709754 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.709793 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run-ovn\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.710085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-additional-scripts\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.711896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-scripts\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.735381 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdbc\" (UniqueName: \"kubernetes.io/projected/f25c3d1b-001b-4078-9798-c1f529d93cad-kube-api-access-gbdbc\") pod \"ovn-controller-cmsdz-config-qtdjt\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.794909 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cmsdz-config-qtdjt"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.829409 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"e9e8a5f597dfe66901c775ed9f99ffb45cc88a323355d8c8e377c086ac43eb70"}
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.829461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"6b315628dc01734b17c3bfcb292a1a9b22d94c5ec378c68879f38fea37cae951"}
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.843396 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26639759c643b13c4d21489f7821827b84bc8540c3d5e2351ae16a305f743f65"
Jan 29 16:53:16 crc kubenswrapper[4813]: I0129 16:53:16.843462 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cmsdz-config-psd7x"
Jan 29 16:53:17 crc kubenswrapper[4813]: W0129 16:53:17.290522 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf25c3d1b_001b_4078_9798_c1f529d93cad.slice/crio-ed5f150807b97e3922f9fac92ffce2057b73baedb0a0ea4a91474c8c6644b32f WatchSource:0}: Error finding container ed5f150807b97e3922f9fac92ffce2057b73baedb0a0ea4a91474c8c6644b32f: Status 404 returned error can't find the container with id ed5f150807b97e3922f9fac92ffce2057b73baedb0a0ea4a91474c8c6644b32f
Jan 29 16:53:17 crc kubenswrapper[4813]: I0129 16:53:17.297422 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cmsdz-config-qtdjt"]
Jan 29 16:53:17 crc kubenswrapper[4813]: I0129 16:53:17.857100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz-config-qtdjt" event={"ID":"f25c3d1b-001b-4078-9798-c1f529d93cad","Type":"ContainerStarted","Data":"ed5f150807b97e3922f9fac92ffce2057b73baedb0a0ea4a91474c8c6644b32f"}
Jan 29 16:53:18 crc kubenswrapper[4813]: I0129 16:53:18.262991 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215" path="/var/lib/kubelet/pods/abd53f7c-29fa-47eb-a3c7-cc3c6ed5c215/volumes"
Jan 29 16:53:20 crc kubenswrapper[4813]: I0129 16:53:20.880340 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfxl8" event={"ID":"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37","Type":"ContainerStarted","Data":"f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54"}
Jan 29 16:53:21 crc kubenswrapper[4813]: I0129 16:53:21.692295 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 29 16:53:21 crc kubenswrapper[4813]: I0129 16:53:21.894562 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"57a4c1b42a22a49c5352631c53b0536257db9090a489fe6d5ac1fae934fe7b10"}
Jan 29 16:53:21 crc kubenswrapper[4813]: I0129 16:53:21.894844 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"8300f3dbc427b40576635dc7f78e11d14f9f9f09cfe4720072bd30ea18c40a96"}
Jan 29 16:53:21 crc kubenswrapper[4813]: I0129 16:53:21.898362 4813 generic.go:334] "Generic (PLEG): container finished" podID="f25c3d1b-001b-4078-9798-c1f529d93cad" containerID="d1e6d50b020d56696baa547bab3ba30760ed87a983a01fa3686da6a9c211e646" exitCode=0
Jan 29 16:53:21 crc kubenswrapper[4813]: I0129 16:53:21.898421 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz-config-qtdjt" event={"ID":"f25c3d1b-001b-4078-9798-c1f529d93cad","Type":"ContainerDied","Data":"d1e6d50b020d56696baa547bab3ba30760ed87a983a01fa3686da6a9c211e646"}
Jan 29 16:53:21 crc kubenswrapper[4813]: I0129 16:53:21.946771 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sfxl8" podStartSLOduration=7.019206875 podStartE2EDuration="9.946752542s" podCreationTimestamp="2026-01-29 16:53:12 +0000 UTC" firstStartedPulling="2026-01-29 16:53:13.763831264 +0000 UTC m=+1446.251034480" lastFinishedPulling="2026-01-29 16:53:16.691376931 +0000 UTC m=+1449.178580147" observedRunningTime="2026-01-29 16:53:21.943254633 +0000 UTC m=+1454.430457849" watchObservedRunningTime="2026-01-29 16:53:21.946752542 +0000 UTC m=+1454.433955758"
Jan 29 16:53:21 crc kubenswrapper[4813]: I0129 16:53:21.976666 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ndpm2"]
Jan 29 16:53:21 crc kubenswrapper[4813]: I0129 16:53:21.977707 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ndpm2"
Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.012146 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ndpm2"]
Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.036220 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused"
Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.090580 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-x847l"]
Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.092045 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-x847l"
Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.101927 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8fd5-account-create-update-5d6vp"]
Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.104891 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.108582 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.114128 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8fd5-account-create-update-5d6vp"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.115789 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/676e4275-70b7-4ba7-abbc-57cd145d0ff1-operator-scripts\") pod \"cinder-db-create-ndpm2\" (UID: \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\") " pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.115906 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhnc4\" (UniqueName: \"kubernetes.io/projected/676e4275-70b7-4ba7-abbc-57cd145d0ff1-kube-api-access-rhnc4\") pod \"cinder-db-create-ndpm2\" (UID: \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\") " pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.124697 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-x847l"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.194830 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5abd-account-create-update-572vc"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.196302 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.203620 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.208008 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5abd-account-create-update-572vc"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.217721 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f70648-8c28-4159-9ae5-284478e0815c-operator-scripts\") pod \"cinder-8fd5-account-create-update-5d6vp\" (UID: \"58f70648-8c28-4159-9ae5-284478e0815c\") " pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.217818 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/676e4275-70b7-4ba7-abbc-57cd145d0ff1-operator-scripts\") pod \"cinder-db-create-ndpm2\" (UID: \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\") " pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.217901 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhnc4\" (UniqueName: \"kubernetes.io/projected/676e4275-70b7-4ba7-abbc-57cd145d0ff1-kube-api-access-rhnc4\") pod \"cinder-db-create-ndpm2\" (UID: \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\") " pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.217947 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxzdc\" (UniqueName: 
\"kubernetes.io/projected/58f70648-8c28-4159-9ae5-284478e0815c-kube-api-access-cxzdc\") pod \"cinder-8fd5-account-create-update-5d6vp\" (UID: \"58f70648-8c28-4159-9ae5-284478e0815c\") " pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.218005 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f635f7b-65c8-4f91-8473-56bfd6775987-operator-scripts\") pod \"barbican-db-create-x847l\" (UID: \"9f635f7b-65c8-4f91-8473-56bfd6775987\") " pod="openstack/barbican-db-create-x847l" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.218046 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtrl\" (UniqueName: \"kubernetes.io/projected/9f635f7b-65c8-4f91-8473-56bfd6775987-kube-api-access-djtrl\") pod \"barbican-db-create-x847l\" (UID: \"9f635f7b-65c8-4f91-8473-56bfd6775987\") " pod="openstack/barbican-db-create-x847l" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.219045 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/676e4275-70b7-4ba7-abbc-57cd145d0ff1-operator-scripts\") pod \"cinder-db-create-ndpm2\" (UID: \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\") " pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.244575 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhnc4\" (UniqueName: \"kubernetes.io/projected/676e4275-70b7-4ba7-abbc-57cd145d0ff1-kube-api-access-rhnc4\") pod \"cinder-db-create-ndpm2\" (UID: \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\") " pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.296444 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.319350 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f635f7b-65c8-4f91-8473-56bfd6775987-operator-scripts\") pod \"barbican-db-create-x847l\" (UID: \"9f635f7b-65c8-4f91-8473-56bfd6775987\") " pod="openstack/barbican-db-create-x847l" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.319415 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63edd52e-83f3-4a95-9b1d-3085129d0555-operator-scripts\") pod \"barbican-5abd-account-create-update-572vc\" (UID: \"63edd52e-83f3-4a95-9b1d-3085129d0555\") " pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.319446 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djtrl\" (UniqueName: \"kubernetes.io/projected/9f635f7b-65c8-4f91-8473-56bfd6775987-kube-api-access-djtrl\") pod \"barbican-db-create-x847l\" (UID: \"9f635f7b-65c8-4f91-8473-56bfd6775987\") " pod="openstack/barbican-db-create-x847l" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.319489 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f70648-8c28-4159-9ae5-284478e0815c-operator-scripts\") pod \"cinder-8fd5-account-create-update-5d6vp\" (UID: \"58f70648-8c28-4159-9ae5-284478e0815c\") " pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.319590 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb546\" (UniqueName: \"kubernetes.io/projected/63edd52e-83f3-4a95-9b1d-3085129d0555-kube-api-access-mb546\") pod \"barbican-5abd-account-create-update-572vc\" (UID: \"63edd52e-83f3-4a95-9b1d-3085129d0555\") " pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.319646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxzdc\" (UniqueName: \"kubernetes.io/projected/58f70648-8c28-4159-9ae5-284478e0815c-kube-api-access-cxzdc\") pod \"cinder-8fd5-account-create-update-5d6vp\" (UID: \"58f70648-8c28-4159-9ae5-284478e0815c\") " pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.320368 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f635f7b-65c8-4f91-8473-56bfd6775987-operator-scripts\") pod \"barbican-db-create-x847l\" (UID: \"9f635f7b-65c8-4f91-8473-56bfd6775987\") " pod="openstack/barbican-db-create-x847l" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.320538 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f70648-8c28-4159-9ae5-284478e0815c-operator-scripts\") pod \"cinder-8fd5-account-create-update-5d6vp\" (UID: \"58f70648-8c28-4159-9ae5-284478e0815c\") " pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.336712 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtrl\" (UniqueName: 
\"kubernetes.io/projected/9f635f7b-65c8-4f91-8473-56bfd6775987-kube-api-access-djtrl\") pod \"barbican-db-create-x847l\" (UID: \"9f635f7b-65c8-4f91-8473-56bfd6775987\") " pod="openstack/barbican-db-create-x847l" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.371282 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxzdc\" (UniqueName: \"kubernetes.io/projected/58f70648-8c28-4159-9ae5-284478e0815c-kube-api-access-cxzdc\") pod \"cinder-8fd5-account-create-update-5d6vp\" (UID: \"58f70648-8c28-4159-9ae5-284478e0815c\") " pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.406972 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-x847l" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.418033 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.420559 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63edd52e-83f3-4a95-9b1d-3085129d0555-operator-scripts\") pod \"barbican-5abd-account-create-update-572vc\" (UID: \"63edd52e-83f3-4a95-9b1d-3085129d0555\") " pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.420656 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb546\" (UniqueName: \"kubernetes.io/projected/63edd52e-83f3-4a95-9b1d-3085129d0555-kube-api-access-mb546\") pod \"barbican-5abd-account-create-update-572vc\" (UID: \"63edd52e-83f3-4a95-9b1d-3085129d0555\") " pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.421701 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63edd52e-83f3-4a95-9b1d-3085129d0555-operator-scripts\") pod \"barbican-5abd-account-create-update-572vc\" (UID: \"63edd52e-83f3-4a95-9b1d-3085129d0555\") " pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.450978 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb546\" (UniqueName: \"kubernetes.io/projected/63edd52e-83f3-4a95-9b1d-3085129d0555-kube-api-access-mb546\") pod \"barbican-5abd-account-create-update-572vc\" (UID: \"63edd52e-83f3-4a95-9b1d-3085129d0555\") " pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.496308 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-7w666"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.497681 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.502051 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.502667 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.502982 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.503346 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l2qv9" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.505004 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7w666"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.513911 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cb83-account-create-update-4sgpf"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.515162 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.519627 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.520526 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.554914 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb83-account-create-update-4sgpf"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.626075 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-config-data\") pod \"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.626153 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv9dk\" (UniqueName: \"kubernetes.io/projected/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-kube-api-access-cv9dk\") pod \"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.626181 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bb9308c-c3ab-42c8-8ffb-e813511fe562-operator-scripts\") pod \"neutron-cb83-account-create-update-4sgpf\" (UID: \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\") " pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.626226 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-combined-ca-bundle\") pod \"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.626279 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8c7\" (UniqueName: \"kubernetes.io/projected/3bb9308c-c3ab-42c8-8ffb-e813511fe562-kube-api-access-tb8c7\") pod \"neutron-cb83-account-create-update-4sgpf\" (UID: \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\") " pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.643408 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-q4cj5"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.645371 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.667901 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q4cj5"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.730383 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv9dk\" (UniqueName: \"kubernetes.io/projected/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-kube-api-access-cv9dk\") pod \"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.730426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bb9308c-c3ab-42c8-8ffb-e813511fe562-operator-scripts\") pod \"neutron-cb83-account-create-update-4sgpf\" (UID: \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\") " pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.730486 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vfvq\" (UniqueName: \"kubernetes.io/projected/8b3d995a-ff26-44ea-a0c7-dd1959798b44-kube-api-access-9vfvq\") pod \"neutron-db-create-q4cj5\" (UID: \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\") " pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.730520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b3d995a-ff26-44ea-a0c7-dd1959798b44-operator-scripts\") pod \"neutron-db-create-q4cj5\" (UID: \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\") " pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.730542 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-combined-ca-bundle\") pod \"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.730620 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8c7\" (UniqueName: \"kubernetes.io/projected/3bb9308c-c3ab-42c8-8ffb-e813511fe562-kube-api-access-tb8c7\") pod \"neutron-cb83-account-create-update-4sgpf\" (UID: \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\") " pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.730675 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-config-data\") pod 
\"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.732491 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bb9308c-c3ab-42c8-8ffb-e813511fe562-operator-scripts\") pod \"neutron-cb83-account-create-update-4sgpf\" (UID: \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\") " pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.738970 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-combined-ca-bundle\") pod \"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.755513 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8c7\" (UniqueName: \"kubernetes.io/projected/3bb9308c-c3ab-42c8-8ffb-e813511fe562-kube-api-access-tb8c7\") pod \"neutron-cb83-account-create-update-4sgpf\" (UID: \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\") " pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.757157 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-config-data\") pod \"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.761481 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv9dk\" (UniqueName: \"kubernetes.io/projected/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-kube-api-access-cv9dk\") pod \"keystone-db-sync-7w666\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.832260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vfvq\" (UniqueName: \"kubernetes.io/projected/8b3d995a-ff26-44ea-a0c7-dd1959798b44-kube-api-access-9vfvq\") pod \"neutron-db-create-q4cj5\" (UID: \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\") " pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.832312 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b3d995a-ff26-44ea-a0c7-dd1959798b44-operator-scripts\") pod \"neutron-db-create-q4cj5\" (UID: \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\") " pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.833138 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b3d995a-ff26-44ea-a0c7-dd1959798b44-operator-scripts\") pod \"neutron-db-create-q4cj5\" (UID: \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\") " pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.862891 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vfvq\" (UniqueName: \"kubernetes.io/projected/8b3d995a-ff26-44ea-a0c7-dd1959798b44-kube-api-access-9vfvq\") pod \"neutron-db-create-q4cj5\" (UID: \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\") 
" pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.909579 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.925523 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ndpm2"] Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.937703 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.962682 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.963008 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:22 crc kubenswrapper[4813]: I0129 16:53:22.991381 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.024559 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.055095 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8fd5-account-create-update-5d6vp"] Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.127646 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5abd-account-create-update-572vc"] Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.144629 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-x847l"] Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.734280 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb83-account-create-update-4sgpf"] Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.780331 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-q4cj5"] Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.793221 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-7w666"] Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.922858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8fd5-account-create-update-5d6vp" event={"ID":"58f70648-8c28-4159-9ae5-284478e0815c","Type":"ContainerStarted","Data":"212be768f897ca64f3d18c739828b8b696cf5563b1d0688911ec89515e813b83"} Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.922982 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8fd5-account-create-update-5d6vp" event={"ID":"58f70648-8c28-4159-9ae5-284478e0815c","Type":"ContainerStarted","Data":"a431c1c25df0f15af5bba18c908500182c0aacdbdcb20bd4eb7385d7f2441083"} Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.929897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5abd-account-create-update-572vc" event={"ID":"63edd52e-83f3-4a95-9b1d-3085129d0555","Type":"ContainerStarted","Data":"f609975417f95bdceff025a15be16c408c073985a17477d47096e6d647a7884d"} Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.929957 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5abd-account-create-update-572vc" 
event={"ID":"63edd52e-83f3-4a95-9b1d-3085129d0555","Type":"ContainerStarted","Data":"6973f6b2cf51860a4c15d0c56690119beb2a5ddcb84dc937d2d3651c6508584e"} Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.939370 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ndpm2" event={"ID":"676e4275-70b7-4ba7-abbc-57cd145d0ff1","Type":"ContainerStarted","Data":"9588b7fdab2be60f9baa05f0301b5623cf399165b2c320c6fdd3dd4ec4e7c39e"} Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.939424 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ndpm2" event={"ID":"676e4275-70b7-4ba7-abbc-57cd145d0ff1","Type":"ContainerStarted","Data":"68bda3ff53a21a995afb157da074e4213fc8fc6a6e6af25f9ab0e423894ce706"} Jan 29 16:53:23 crc kubenswrapper[4813]: I0129 16:53:23.949167 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-x847l" event={"ID":"9f635f7b-65c8-4f91-8473-56bfd6775987","Type":"ContainerStarted","Data":"7933df34edd269e0b352ab21902615f0c7737e2cf96bf7a21903b810e7ae877e"} Jan 29 16:53:25 crc kubenswrapper[4813]: I0129 16:53:25.021776 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:25 crc kubenswrapper[4813]: I0129 16:53:25.062617 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-ndpm2" podStartSLOduration=4.062596383 podStartE2EDuration="4.062596383s" podCreationTimestamp="2026-01-29 16:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:24.978452408 +0000 UTC m=+1457.465655624" watchObservedRunningTime="2026-01-29 16:53:25.062596383 +0000 UTC m=+1457.549799599" Jan 29 16:53:25 crc kubenswrapper[4813]: I0129 16:53:25.111678 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sfxl8"] Jan 29 16:53:25 crc kubenswrapper[4813]: I0129 16:53:25.983034 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5abd-account-create-update-572vc" podStartSLOduration=3.983011017 podStartE2EDuration="3.983011017s" podCreationTimestamp="2026-01-29 16:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:25.979874438 +0000 UTC m=+1458.467077654" watchObservedRunningTime="2026-01-29 16:53:25.983011017 +0000 UTC m=+1458.470214233" Jan 29 16:53:26 crc kubenswrapper[4813]: W0129 16:53:26.977533 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b3d995a_ff26_44ea_a0c7_dd1959798b44.slice/crio-7e22fc51fe5d331fbbc785bcbc9ba6677a115cad6797ee583c0c986cd81844c3 WatchSource:0}: Error finding container 7e22fc51fe5d331fbbc785bcbc9ba6677a115cad6797ee583c0c986cd81844c3: Status 404 returned error can't find the container with id 7e22fc51fe5d331fbbc785bcbc9ba6677a115cad6797ee583c0c986cd81844c3 Jan 29 16:53:26 crc kubenswrapper[4813]: I0129 16:53:26.996398 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz-config-qtdjt" event={"ID":"f25c3d1b-001b-4078-9798-c1f529d93cad","Type":"ContainerDied","Data":"ed5f150807b97e3922f9fac92ffce2057b73baedb0a0ea4a91474c8c6644b32f"} Jan 29 16:53:26 crc kubenswrapper[4813]: I0129 16:53:26.996942 4813 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="ed5f150807b97e3922f9fac92ffce2057b73baedb0a0ea4a91474c8c6644b32f" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.002888 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb83-account-create-update-4sgpf" event={"ID":"3bb9308c-c3ab-42c8-8ffb-e813511fe562","Type":"ContainerStarted","Data":"99ccd775ab32b9966fef37d6ae4b4a26ad9cd3b3b609055f000a72286eebf377"} Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.003350 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sfxl8" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerName="registry-server" containerID="cri-o://f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54" gracePeriod=2 Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.028343 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8fd5-account-create-update-5d6vp" podStartSLOduration=5.028318626 podStartE2EDuration="5.028318626s" podCreationTimestamp="2026-01-29 16:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:27.026300509 +0000 UTC m=+1459.513503725" watchObservedRunningTime="2026-01-29 16:53:27.028318626 +0000 UTC m=+1459.515521842" Jan 29 16:53:27 crc kubenswrapper[4813]: E0129 16:53:27.274277 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfecbbbe0_d104_48a7_a58a_7e9dd83f0b37.slice/crio-conmon-f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod676e4275_70b7_4ba7_abbc_57cd145d0ff1.slice/crio-9588b7fdab2be60f9baa05f0301b5623cf399165b2c320c6fdd3dd4ec4e7c39e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfecbbbe0_d104_48a7_a58a_7e9dd83f0b37.slice/crio-f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod676e4275_70b7_4ba7_abbc_57cd145d0ff1.slice/crio-conmon-9588b7fdab2be60f9baa05f0301b5623cf399165b2c320c6fdd3dd4ec4e7c39e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63edd52e_83f3_4a95_9b1d_3085129d0555.slice/crio-conmon-f609975417f95bdceff025a15be16c408c073985a17477d47096e6d647a7884d.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.389803 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.527974 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.533517 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run-ovn\") pod \"f25c3d1b-001b-4078-9798-c1f529d93cad\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.533636 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-scripts\") pod \"f25c3d1b-001b-4078-9798-c1f529d93cad\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.533756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbdbc\" (UniqueName: \"kubernetes.io/projected/f25c3d1b-001b-4078-9798-c1f529d93cad-kube-api-access-gbdbc\") pod \"f25c3d1b-001b-4078-9798-c1f529d93cad\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.533903 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run\") pod \"f25c3d1b-001b-4078-9798-c1f529d93cad\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.533977 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-additional-scripts\") pod \"f25c3d1b-001b-4078-9798-c1f529d93cad\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.534077 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-log-ovn\") pod \"f25c3d1b-001b-4078-9798-c1f529d93cad\" (UID: \"f25c3d1b-001b-4078-9798-c1f529d93cad\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.534802 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f25c3d1b-001b-4078-9798-c1f529d93cad" (UID: "f25c3d1b-001b-4078-9798-c1f529d93cad"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.534867 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run" (OuterVolumeSpecName: "var-run") pod "f25c3d1b-001b-4078-9798-c1f529d93cad" (UID: "f25c3d1b-001b-4078-9798-c1f529d93cad"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.535687 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f25c3d1b-001b-4078-9798-c1f529d93cad" (UID: "f25c3d1b-001b-4078-9798-c1f529d93cad"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.537905 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f25c3d1b-001b-4078-9798-c1f529d93cad" (UID: "f25c3d1b-001b-4078-9798-c1f529d93cad"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.538192 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-scripts" (OuterVolumeSpecName: "scripts") pod "f25c3d1b-001b-4078-9798-c1f529d93cad" (UID: "f25c3d1b-001b-4078-9798-c1f529d93cad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.542728 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25c3d1b-001b-4078-9798-c1f529d93cad-kube-api-access-gbdbc" (OuterVolumeSpecName: "kube-api-access-gbdbc") pod "f25c3d1b-001b-4078-9798-c1f529d93cad" (UID: "f25c3d1b-001b-4078-9798-c1f529d93cad"). InnerVolumeSpecName "kube-api-access-gbdbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.637352 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfb7r\" (UniqueName: \"kubernetes.io/projected/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-kube-api-access-lfb7r\") pod \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.637420 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-utilities\") pod \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.637481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-catalog-content\") pod \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\" (UID: \"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37\") " Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.638139 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.638160 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbdbc\" (UniqueName: \"kubernetes.io/projected/f25c3d1b-001b-4078-9798-c1f529d93cad-kube-api-access-gbdbc\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.638173 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.638181 4813 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f25c3d1b-001b-4078-9798-c1f529d93cad-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:27 crc 
kubenswrapper[4813]: I0129 16:53:27.638189 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.638199 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f25c3d1b-001b-4078-9798-c1f529d93cad-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.638363 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-utilities" (OuterVolumeSpecName: "utilities") pod "fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" (UID: "fecbbbe0-d104-48a7-a58a-7e9dd83f0b37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.642166 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-kube-api-access-lfb7r" (OuterVolumeSpecName: "kube-api-access-lfb7r") pod "fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" (UID: "fecbbbe0-d104-48a7-a58a-7e9dd83f0b37"). InnerVolumeSpecName "kube-api-access-lfb7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.704679 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" (UID: "fecbbbe0-d104-48a7-a58a-7e9dd83f0b37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.739470 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.739531 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfb7r\" (UniqueName: \"kubernetes.io/projected/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-kube-api-access-lfb7r\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:27 crc kubenswrapper[4813]: I0129 16:53:27.739548 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.015638 4813 generic.go:334] "Generic (PLEG): container finished" podID="676e4275-70b7-4ba7-abbc-57cd145d0ff1" containerID="9588b7fdab2be60f9baa05f0301b5623cf399165b2c320c6fdd3dd4ec4e7c39e" exitCode=0 Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.015706 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ndpm2" event={"ID":"676e4275-70b7-4ba7-abbc-57cd145d0ff1","Type":"ContainerDied","Data":"9588b7fdab2be60f9baa05f0301b5623cf399165b2c320c6fdd3dd4ec4e7c39e"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.017763 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb83-account-create-update-4sgpf" event={"ID":"3bb9308c-c3ab-42c8-8ffb-e813511fe562","Type":"ContainerStarted","Data":"999eeaabe251e8d162ef987bc0a172459387bd5c53954b54eebf0107ea211637"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.022347 4813 generic.go:334] "Generic (PLEG): container finished" podID="63edd52e-83f3-4a95-9b1d-3085129d0555" containerID="f609975417f95bdceff025a15be16c408c073985a17477d47096e6d647a7884d" exitCode=0 Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.022426 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5abd-account-create-update-572vc" event={"ID":"63edd52e-83f3-4a95-9b1d-3085129d0555","Type":"ContainerDied","Data":"f609975417f95bdceff025a15be16c408c073985a17477d47096e6d647a7884d"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.026439 4813 generic.go:334] "Generic (PLEG): container finished" podID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerID="f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54" exitCode=0 Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.026514 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfxl8" event={"ID":"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37","Type":"ContainerDied","Data":"f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.026533 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfxl8" event={"ID":"fecbbbe0-d104-48a7-a58a-7e9dd83f0b37","Type":"ContainerDied","Data":"c45b9321e7c3af7e34d6b4fe31566de3c613024d5b67ee3e6f365ad9241a9473"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.026547 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfxl8" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.026582 4813 scope.go:117] "RemoveContainer" containerID="f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.048349 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f635f7b-65c8-4f91-8473-56bfd6775987" containerID="08482866b1cd8b860fea765deb6f1e75dd02ef7fd225c88c5a2214f17fa836f2" exitCode=0 Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.048472 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-x847l" event={"ID":"9f635f7b-65c8-4f91-8473-56bfd6775987","Type":"ContainerDied","Data":"08482866b1cd8b860fea765deb6f1e75dd02ef7fd225c88c5a2214f17fa836f2"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.052663 4813 generic.go:334] "Generic (PLEG): container finished" podID="58f70648-8c28-4159-9ae5-284478e0815c" containerID="212be768f897ca64f3d18c739828b8b696cf5563b1d0688911ec89515e813b83" exitCode=0 Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.052829 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8fd5-account-create-update-5d6vp" event={"ID":"58f70648-8c28-4159-9ae5-284478e0815c","Type":"ContainerDied","Data":"212be768f897ca64f3d18c739828b8b696cf5563b1d0688911ec89515e813b83"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.070660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"85396d22d1bacb74de0dc15e3d6568fd0dd75a24110e76865a22e9698f5e03d7"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.070708 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"18d667fd3f825a0180d5ee212d62dc003f4d18f1f616c39664709f8fd3e22de4"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.075836 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7w666" event={"ID":"346c46a8-5fe9-44d0-882a-6ef6412e6e0d","Type":"ContainerStarted","Data":"1f0a869004a311b8dcea9e30e40db3accf6e50ef9d27551cf4968bd166676aec"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.076328 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cb83-account-create-update-4sgpf" podStartSLOduration=6.076307231 podStartE2EDuration="6.076307231s" podCreationTimestamp="2026-01-29 16:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:28.067630106 +0000 UTC m=+1460.554833322" watchObservedRunningTime="2026-01-29 16:53:28.076307231 +0000 UTC m=+1460.563510447" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.078226 4813 generic.go:334] "Generic (PLEG): container finished" podID="8b3d995a-ff26-44ea-a0c7-dd1959798b44" containerID="7152055c87a94a3b725894a406e8e6935ce7638af6f4e8afe49c22fe1ae5e922" exitCode=0 Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.078329 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cmsdz-config-qtdjt" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.079894 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q4cj5" event={"ID":"8b3d995a-ff26-44ea-a0c7-dd1959798b44","Type":"ContainerDied","Data":"7152055c87a94a3b725894a406e8e6935ce7638af6f4e8afe49c22fe1ae5e922"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.079962 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q4cj5" event={"ID":"8b3d995a-ff26-44ea-a0c7-dd1959798b44","Type":"ContainerStarted","Data":"7e22fc51fe5d331fbbc785bcbc9ba6677a115cad6797ee583c0c986cd81844c3"} Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.082354 4813 scope.go:117] "RemoveContainer" containerID="a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.140274 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sfxl8"] Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.141508 4813 scope.go:117] "RemoveContainer" containerID="ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.152057 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sfxl8"] Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.183720 4813 scope.go:117] "RemoveContainer" containerID="f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54" Jan 29 16:53:28 crc kubenswrapper[4813]: E0129 16:53:28.184231 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54\": container with ID starting with f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54 not found: ID does not exist" containerID="f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.184283 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54"} err="failed to get container status \"f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54\": rpc error: code = NotFound desc = could not find container \"f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54\": container with ID starting with f28fa0ca38d2b0ee31d1fa7b6f744a036c7e318394e6076ef5e12edaa4380a54 not found: ID does not exist" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.184311 4813 scope.go:117] "RemoveContainer" containerID="a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee" Jan 29 16:53:28 crc kubenswrapper[4813]: E0129 16:53:28.186313 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee\": container with ID starting with a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee not found: ID does not exist" containerID="a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.186352 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee"} err="failed to get container status 
\"a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee\": rpc error: code = NotFound desc = could not find container \"a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee\": container with ID starting with a2c845d91a13567aa6061b9fe0447ae03566b7705646fce9ff7a156f9862dcee not found: ID does not exist" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.186383 4813 scope.go:117] "RemoveContainer" containerID="ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5" Jan 29 16:53:28 crc kubenswrapper[4813]: E0129 16:53:28.193959 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5\": container with ID starting with ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5 not found: ID does not exist" containerID="ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.193991 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5"} err="failed to get container status \"ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5\": rpc error: code = NotFound desc = could not find container \"ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5\": container with ID starting with ad488aa6b8add8db3f0d45c8b386f1ba34ae4c8cd83da71944c570e4c5f5bee5 not found: ID does not exist" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.252193 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" path="/var/lib/kubelet/pods/fecbbbe0-d104-48a7-a58a-7e9dd83f0b37/volumes" Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.484293 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cmsdz-config-qtdjt"] Jan 29 16:53:28 crc kubenswrapper[4813]: I0129 16:53:28.493239 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cmsdz-config-qtdjt"] Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.105411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb83-account-create-update-4sgpf" event={"ID":"3bb9308c-c3ab-42c8-8ffb-e813511fe562","Type":"ContainerDied","Data":"999eeaabe251e8d162ef987bc0a172459387bd5c53954b54eebf0107ea211637"} Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.105262 4813 generic.go:334] "Generic (PLEG): container finished" podID="3bb9308c-c3ab-42c8-8ffb-e813511fe562" containerID="999eeaabe251e8d162ef987bc0a172459387bd5c53954b54eebf0107ea211637" exitCode=0 Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.118001 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"0cf8a700e82655607fac9d5a91b1fe7324cf5f6a7f959b303b9cc0b73826bb23"} Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.118075 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"0912400cbaf3d64dac1bbb048041811be489912643b2b55d8cdd00b6904d5618"} Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.118100 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"974ef80b14499b0e09dc5f020a2ac326c98830e1859b4884bb3296cb6d847ee5"} Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.118130 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"2cc8dee52a522946f284add3268d6d09db41b3c578932c60a72ffe29a472f072"} Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.527179 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.575770 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhnc4\" (UniqueName: \"kubernetes.io/projected/676e4275-70b7-4ba7-abbc-57cd145d0ff1-kube-api-access-rhnc4\") pod \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\" (UID: \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.575858 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/676e4275-70b7-4ba7-abbc-57cd145d0ff1-operator-scripts\") pod \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\" (UID: \"676e4275-70b7-4ba7-abbc-57cd145d0ff1\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.577042 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/676e4275-70b7-4ba7-abbc-57cd145d0ff1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "676e4275-70b7-4ba7-abbc-57cd145d0ff1" (UID: "676e4275-70b7-4ba7-abbc-57cd145d0ff1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.591402 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676e4275-70b7-4ba7-abbc-57cd145d0ff1-kube-api-access-rhnc4" (OuterVolumeSpecName: "kube-api-access-rhnc4") pod "676e4275-70b7-4ba7-abbc-57cd145d0ff1" (UID: "676e4275-70b7-4ba7-abbc-57cd145d0ff1"). InnerVolumeSpecName "kube-api-access-rhnc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.634717 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.637994 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.648942 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-x847l" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.661797 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677229 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63edd52e-83f3-4a95-9b1d-3085129d0555-operator-scripts\") pod \"63edd52e-83f3-4a95-9b1d-3085129d0555\" (UID: \"63edd52e-83f3-4a95-9b1d-3085129d0555\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677297 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f635f7b-65c8-4f91-8473-56bfd6775987-operator-scripts\") pod \"9f635f7b-65c8-4f91-8473-56bfd6775987\" (UID: \"9f635f7b-65c8-4f91-8473-56bfd6775987\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677345 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vfvq\" (UniqueName: \"kubernetes.io/projected/8b3d995a-ff26-44ea-a0c7-dd1959798b44-kube-api-access-9vfvq\") pod \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\" (UID: \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677388 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b3d995a-ff26-44ea-a0c7-dd1959798b44-operator-scripts\") pod \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\" (UID: \"8b3d995a-ff26-44ea-a0c7-dd1959798b44\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677423 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxzdc\" (UniqueName: \"kubernetes.io/projected/58f70648-8c28-4159-9ae5-284478e0815c-kube-api-access-cxzdc\") pod \"58f70648-8c28-4159-9ae5-284478e0815c\" (UID: \"58f70648-8c28-4159-9ae5-284478e0815c\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677503 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djtrl\" (UniqueName: \"kubernetes.io/projected/9f635f7b-65c8-4f91-8473-56bfd6775987-kube-api-access-djtrl\") pod \"9f635f7b-65c8-4f91-8473-56bfd6775987\" (UID: \"9f635f7b-65c8-4f91-8473-56bfd6775987\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f70648-8c28-4159-9ae5-284478e0815c-operator-scripts\") pod \"58f70648-8c28-4159-9ae5-284478e0815c\" (UID: \"58f70648-8c28-4159-9ae5-284478e0815c\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677563 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb546\" (UniqueName: \"kubernetes.io/projected/63edd52e-83f3-4a95-9b1d-3085129d0555-kube-api-access-mb546\") pod \"63edd52e-83f3-4a95-9b1d-3085129d0555\" (UID: \"63edd52e-83f3-4a95-9b1d-3085129d0555\") " Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.677993 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f635f7b-65c8-4f91-8473-56bfd6775987-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9f635f7b-65c8-4f91-8473-56bfd6775987" (UID: "9f635f7b-65c8-4f91-8473-56bfd6775987"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.678024 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhnc4\" (UniqueName: \"kubernetes.io/projected/676e4275-70b7-4ba7-abbc-57cd145d0ff1-kube-api-access-rhnc4\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.678045 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/676e4275-70b7-4ba7-abbc-57cd145d0ff1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.678266 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63edd52e-83f3-4a95-9b1d-3085129d0555-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63edd52e-83f3-4a95-9b1d-3085129d0555" (UID: "63edd52e-83f3-4a95-9b1d-3085129d0555"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.678929 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3d995a-ff26-44ea-a0c7-dd1959798b44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b3d995a-ff26-44ea-a0c7-dd1959798b44" (UID: "8b3d995a-ff26-44ea-a0c7-dd1959798b44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.680738 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f70648-8c28-4159-9ae5-284478e0815c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58f70648-8c28-4159-9ae5-284478e0815c" (UID: "58f70648-8c28-4159-9ae5-284478e0815c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.682064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63edd52e-83f3-4a95-9b1d-3085129d0555-kube-api-access-mb546" (OuterVolumeSpecName: "kube-api-access-mb546") pod "63edd52e-83f3-4a95-9b1d-3085129d0555" (UID: "63edd52e-83f3-4a95-9b1d-3085129d0555"). InnerVolumeSpecName "kube-api-access-mb546". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.683409 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f70648-8c28-4159-9ae5-284478e0815c-kube-api-access-cxzdc" (OuterVolumeSpecName: "kube-api-access-cxzdc") pod "58f70648-8c28-4159-9ae5-284478e0815c" (UID: "58f70648-8c28-4159-9ae5-284478e0815c"). InnerVolumeSpecName "kube-api-access-cxzdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.692378 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f635f7b-65c8-4f91-8473-56bfd6775987-kube-api-access-djtrl" (OuterVolumeSpecName: "kube-api-access-djtrl") pod "9f635f7b-65c8-4f91-8473-56bfd6775987" (UID: "9f635f7b-65c8-4f91-8473-56bfd6775987"). InnerVolumeSpecName "kube-api-access-djtrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.694282 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3d995a-ff26-44ea-a0c7-dd1959798b44-kube-api-access-9vfvq" (OuterVolumeSpecName: "kube-api-access-9vfvq") pod "8b3d995a-ff26-44ea-a0c7-dd1959798b44" (UID: "8b3d995a-ff26-44ea-a0c7-dd1959798b44"). InnerVolumeSpecName "kube-api-access-9vfvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.779666 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63edd52e-83f3-4a95-9b1d-3085129d0555-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.779707 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9f635f7b-65c8-4f91-8473-56bfd6775987-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.779716 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vfvq\" (UniqueName: \"kubernetes.io/projected/8b3d995a-ff26-44ea-a0c7-dd1959798b44-kube-api-access-9vfvq\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.779731 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b3d995a-ff26-44ea-a0c7-dd1959798b44-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.779743 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxzdc\" (UniqueName: \"kubernetes.io/projected/58f70648-8c28-4159-9ae5-284478e0815c-kube-api-access-cxzdc\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.779755 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djtrl\" (UniqueName: \"kubernetes.io/projected/9f635f7b-65c8-4f91-8473-56bfd6775987-kube-api-access-djtrl\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.779765 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f70648-8c28-4159-9ae5-284478e0815c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:29 crc kubenswrapper[4813]: I0129 16:53:29.779776 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb546\" (UniqueName: \"kubernetes.io/projected/63edd52e-83f3-4a95-9b1d-3085129d0555-kube-api-access-mb546\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.127796 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ndpm2" event={"ID":"676e4275-70b7-4ba7-abbc-57cd145d0ff1","Type":"ContainerDied","Data":"68bda3ff53a21a995afb157da074e4213fc8fc6a6e6af25f9ab0e423894ce706"} Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.127846 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68bda3ff53a21a995afb157da074e4213fc8fc6a6e6af25f9ab0e423894ce706" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.127819 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ndpm2" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.130038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-x847l" event={"ID":"9f635f7b-65c8-4f91-8473-56bfd6775987","Type":"ContainerDied","Data":"7933df34edd269e0b352ab21902615f0c7737e2cf96bf7a21903b810e7ae877e"} Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.130067 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-x847l" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.130083 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7933df34edd269e0b352ab21902615f0c7737e2cf96bf7a21903b810e7ae877e" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.132571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8fd5-account-create-update-5d6vp" event={"ID":"58f70648-8c28-4159-9ae5-284478e0815c","Type":"ContainerDied","Data":"a431c1c25df0f15af5bba18c908500182c0aacdbdcb20bd4eb7385d7f2441083"} Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.132620 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a431c1c25df0f15af5bba18c908500182c0aacdbdcb20bd4eb7385d7f2441083" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.132582 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8fd5-account-create-update-5d6vp" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.139523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerStarted","Data":"3e0e0ef163807a59b2a6a361cb8f9f50111fa4482f203c2cb3078cc29529bdee"} Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.142717 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-q4cj5" event={"ID":"8b3d995a-ff26-44ea-a0c7-dd1959798b44","Type":"ContainerDied","Data":"7e22fc51fe5d331fbbc785bcbc9ba6677a115cad6797ee583c0c986cd81844c3"} Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.142737 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-q4cj5" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.142750 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e22fc51fe5d331fbbc785bcbc9ba6677a115cad6797ee583c0c986cd81844c3" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.146152 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5abd-account-create-update-572vc" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.146226 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5abd-account-create-update-572vc" event={"ID":"63edd52e-83f3-4a95-9b1d-3085129d0555","Type":"ContainerDied","Data":"6973f6b2cf51860a4c15d0c56690119beb2a5ddcb84dc937d2d3651c6508584e"} Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.146286 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6973f6b2cf51860a4c15d0c56690119beb2a5ddcb84dc937d2d3651c6508584e" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.186866 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.458856209 podStartE2EDuration="54.186847151s" podCreationTimestamp="2026-01-29 16:52:36 +0000 UTC" firstStartedPulling="2026-01-29 16:53:13.372395028 +0000 UTC m=+1445.859598244" lastFinishedPulling="2026-01-29 16:53:27.10038596 +0000 UTC m=+1459.587589186" observedRunningTime="2026-01-29 16:53:30.177773685 +0000 UTC m=+1462.664976901" watchObservedRunningTime="2026-01-29 16:53:30.186847151 +0000 UTC m=+1462.674050367" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.240380 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.240462 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.251830 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25c3d1b-001b-4078-9798-c1f529d93cad" path="/var/lib/kubelet/pods/f25c3d1b-001b-4078-9798-c1f529d93cad/volumes" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.457155 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vdpbq"] Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.457856 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676e4275-70b7-4ba7-abbc-57cd145d0ff1" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.457875 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="676e4275-70b7-4ba7-abbc-57cd145d0ff1" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.457889 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerName="registry-server" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.457896 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerName="registry-server" Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.457913 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f635f7b-65c8-4f91-8473-56bfd6775987" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.457920 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f635f7b-65c8-4f91-8473-56bfd6775987" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.457932 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25c3d1b-001b-4078-9798-c1f529d93cad" containerName="ovn-config" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.457939 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25c3d1b-001b-4078-9798-c1f529d93cad" containerName="ovn-config" Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.457947 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerName="extract-utilities" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.457954 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerName="extract-utilities" Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.457965 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3d995a-ff26-44ea-a0c7-dd1959798b44" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.457973 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3d995a-ff26-44ea-a0c7-dd1959798b44" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.457995 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f70648-8c28-4159-9ae5-284478e0815c" containerName="mariadb-account-create-update" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458002 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f70648-8c28-4159-9ae5-284478e0815c" containerName="mariadb-account-create-update" Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.458011 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerName="extract-content" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458020 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerName="extract-content" Jan 29 16:53:30 crc kubenswrapper[4813]: E0129 16:53:30.458032 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63edd52e-83f3-4a95-9b1d-3085129d0555" containerName="mariadb-account-create-update" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458041 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="63edd52e-83f3-4a95-9b1d-3085129d0555" containerName="mariadb-account-create-update" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458227 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3d995a-ff26-44ea-a0c7-dd1959798b44" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458245 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecbbbe0-d104-48a7-a58a-7e9dd83f0b37" containerName="registry-server" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458254 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f70648-8c28-4159-9ae5-284478e0815c" containerName="mariadb-account-create-update" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458265 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="676e4275-70b7-4ba7-abbc-57cd145d0ff1" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458276 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25c3d1b-001b-4078-9798-c1f529d93cad" 
containerName="ovn-config" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458288 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="63edd52e-83f3-4a95-9b1d-3085129d0555" containerName="mariadb-account-create-update" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.458304 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f635f7b-65c8-4f91-8473-56bfd6775987" containerName="mariadb-database-create" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.459650 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.466922 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.478270 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vdpbq"] Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.498068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-config\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.498161 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-svc\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.498259 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.498327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.498379 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmpp\" (UniqueName: \"kubernetes.io/projected/1fcb63b6-58c0-4f25-b300-a569d79e6815-kube-api-access-djmpp\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.498422 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.616449 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.616560 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djmpp\" (UniqueName: \"kubernetes.io/projected/1fcb63b6-58c0-4f25-b300-a569d79e6815-kube-api-access-djmpp\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.616616 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.616667 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-config\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.616697 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-svc\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.616788 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.617900 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.618629 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.621235 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.625809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-svc\") 
pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.626327 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-config\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.644827 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djmpp\" (UniqueName: \"kubernetes.io/projected/1fcb63b6-58c0-4f25-b300-a569d79e6815-kube-api-access-djmpp\") pod \"dnsmasq-dns-8db84466c-vdpbq\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:30 crc kubenswrapper[4813]: I0129 16:53:30.793200 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:32 crc kubenswrapper[4813]: I0129 16:53:32.037495 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:53:32 crc kubenswrapper[4813]: I0129 16:53:32.163601 4813 generic.go:334] "Generic (PLEG): container finished" podID="3a9d9351-4988-4044-b47f-de154e889b47" containerID="1566217be5fc6671168e1436a1ffe45d1c7fe94af1c506065d72f0cfdaa35a2f" exitCode=0 Jan 29 16:53:32 crc kubenswrapper[4813]: I0129 16:53:32.163656 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6jmvf" event={"ID":"3a9d9351-4988-4044-b47f-de154e889b47","Type":"ContainerDied","Data":"1566217be5fc6671168e1436a1ffe45d1c7fe94af1c506065d72f0cfdaa35a2f"} Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.202855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb83-account-create-update-4sgpf" event={"ID":"3bb9308c-c3ab-42c8-8ffb-e813511fe562","Type":"ContainerDied","Data":"99ccd775ab32b9966fef37d6ae4b4a26ad9cd3b3b609055f000a72286eebf377"} Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.203204 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ccd775ab32b9966fef37d6ae4b4a26ad9cd3b3b609055f000a72286eebf377" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.390919 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.566774 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bb9308c-c3ab-42c8-8ffb-e813511fe562-operator-scripts\") pod \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\" (UID: \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\") " Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.567331 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb8c7\" (UniqueName: \"kubernetes.io/projected/3bb9308c-c3ab-42c8-8ffb-e813511fe562-kube-api-access-tb8c7\") pod \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\" (UID: \"3bb9308c-c3ab-42c8-8ffb-e813511fe562\") " Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.567517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb9308c-c3ab-42c8-8ffb-e813511fe562-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bb9308c-c3ab-42c8-8ffb-e813511fe562" (UID: "3bb9308c-c3ab-42c8-8ffb-e813511fe562"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.569813 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bb9308c-c3ab-42c8-8ffb-e813511fe562-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.576396 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb9308c-c3ab-42c8-8ffb-e813511fe562-kube-api-access-tb8c7" (OuterVolumeSpecName: "kube-api-access-tb8c7") pod "3bb9308c-c3ab-42c8-8ffb-e813511fe562" (UID: "3bb9308c-c3ab-42c8-8ffb-e813511fe562"). InnerVolumeSpecName "kube-api-access-tb8c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.633256 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vdpbq"] Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.672389 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb8c7\" (UniqueName: \"kubernetes.io/projected/3bb9308c-c3ab-42c8-8ffb-e813511fe562-kube-api-access-tb8c7\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:33 crc kubenswrapper[4813]: W0129 16:53:33.700802 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fcb63b6_58c0_4f25_b300_a569d79e6815.slice/crio-c1badbec5b23de1a3b2a98cbf534d0a10f86bb25902f335bd0648828bc699a8f WatchSource:0}: Error finding container c1badbec5b23de1a3b2a98cbf534d0a10f86bb25902f335bd0648828bc699a8f: Status 404 returned error can't find the container with id c1badbec5b23de1a3b2a98cbf534d0a10f86bb25902f335bd0648828bc699a8f Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.714841 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6jmvf" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.876455 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-combined-ca-bundle\") pod \"3a9d9351-4988-4044-b47f-de154e889b47\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.877100 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-db-sync-config-data\") pod \"3a9d9351-4988-4044-b47f-de154e889b47\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.877158 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgzds\" (UniqueName: \"kubernetes.io/projected/3a9d9351-4988-4044-b47f-de154e889b47-kube-api-access-mgzds\") pod \"3a9d9351-4988-4044-b47f-de154e889b47\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.877202 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-config-data\") pod \"3a9d9351-4988-4044-b47f-de154e889b47\" (UID: \"3a9d9351-4988-4044-b47f-de154e889b47\") " Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.881423 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3a9d9351-4988-4044-b47f-de154e889b47" (UID: "3a9d9351-4988-4044-b47f-de154e889b47"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.881463 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9d9351-4988-4044-b47f-de154e889b47-kube-api-access-mgzds" (OuterVolumeSpecName: "kube-api-access-mgzds") pod "3a9d9351-4988-4044-b47f-de154e889b47" (UID: "3a9d9351-4988-4044-b47f-de154e889b47"). InnerVolumeSpecName "kube-api-access-mgzds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.900914 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a9d9351-4988-4044-b47f-de154e889b47" (UID: "3a9d9351-4988-4044-b47f-de154e889b47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.931798 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-config-data" (OuterVolumeSpecName: "config-data") pod "3a9d9351-4988-4044-b47f-de154e889b47" (UID: "3a9d9351-4988-4044-b47f-de154e889b47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.979472 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.979505 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.979518 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a9d9351-4988-4044-b47f-de154e889b47-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:33 crc kubenswrapper[4813]: I0129 16:53:33.979527 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgzds\" (UniqueName: \"kubernetes.io/projected/3a9d9351-4988-4044-b47f-de154e889b47-kube-api-access-mgzds\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.213438 4813 generic.go:334] "Generic (PLEG): container finished" podID="1fcb63b6-58c0-4f25-b300-a569d79e6815" containerID="f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436" exitCode=0 Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.213672 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" event={"ID":"1fcb63b6-58c0-4f25-b300-a569d79e6815","Type":"ContainerDied","Data":"f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436"} Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.213725 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" event={"ID":"1fcb63b6-58c0-4f25-b300-a569d79e6815","Type":"ContainerStarted","Data":"c1badbec5b23de1a3b2a98cbf534d0a10f86bb25902f335bd0648828bc699a8f"} Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.217024 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6jmvf" event={"ID":"3a9d9351-4988-4044-b47f-de154e889b47","Type":"ContainerDied","Data":"fb0f2d032d3ad80e794506787a7f702684b197c265a118e92db8fa1b5c973491"} Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.217069 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0f2d032d3ad80e794506787a7f702684b197c265a118e92db8fa1b5c973491" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.217161 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6jmvf" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.219026 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb83-account-create-update-4sgpf" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.219194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7w666" event={"ID":"346c46a8-5fe9-44d0-882a-6ef6412e6e0d","Type":"ContainerStarted","Data":"5d34de84fb2abfbb4654d44b445e12d4d418d88954b7a25ac8505eb022196a89"} Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.265802 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-7w666" podStartSLOduration=6.044253166 podStartE2EDuration="12.265778451s" podCreationTimestamp="2026-01-29 16:53:22 +0000 UTC" firstStartedPulling="2026-01-29 16:53:26.978345656 +0000 UTC m=+1459.465548872" lastFinishedPulling="2026-01-29 16:53:33.199870941 +0000 UTC m=+1465.687074157" observedRunningTime="2026-01-29 16:53:34.259745271 +0000 UTC m=+1466.746948487" watchObservedRunningTime="2026-01-29 16:53:34.265778451 +0000 UTC m=+1466.752981667" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.534608 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vdpbq"] Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.581296 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-mczvv"] Jan 29 16:53:34 crc kubenswrapper[4813]: E0129 16:53:34.581631 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9d9351-4988-4044-b47f-de154e889b47" containerName="glance-db-sync" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.581647 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9d9351-4988-4044-b47f-de154e889b47" containerName="glance-db-sync" Jan 29 16:53:34 crc kubenswrapper[4813]: E0129 16:53:34.581670 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb9308c-c3ab-42c8-8ffb-e813511fe562" containerName="mariadb-account-create-update" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.581677 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb9308c-c3ab-42c8-8ffb-e813511fe562" containerName="mariadb-account-create-update" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.581823 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb9308c-c3ab-42c8-8ffb-e813511fe562" containerName="mariadb-account-create-update" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.581841 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a9d9351-4988-4044-b47f-de154e889b47" containerName="glance-db-sync" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.582726 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.603591 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-mczvv"] Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.696024 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.696078 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm68v\" (UniqueName: \"kubernetes.io/projected/abf498ff-9c45-421e-9db3-f114936b22e8-kube-api-access-nm68v\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.696138 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.696163 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-config\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.696180 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.696200 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.797441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.797510 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm68v\" (UniqueName: \"kubernetes.io/projected/abf498ff-9c45-421e-9db3-f114936b22e8-kube-api-access-nm68v\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.797570 4813 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.797610 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-config\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.797634 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.797663 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.798592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.798843 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.800769 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-config\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.801902 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.802023 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.832645 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm68v\" (UniqueName: 
\"kubernetes.io/projected/abf498ff-9c45-421e-9db3-f114936b22e8-kube-api-access-nm68v\") pod \"dnsmasq-dns-74dfc89d77-mczvv\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:34 crc kubenswrapper[4813]: I0129 16:53:34.906283 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.229851 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" podUID="1fcb63b6-58c0-4f25-b300-a569d79e6815" containerName="dnsmasq-dns" containerID="cri-o://c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1" gracePeriod=10 Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.230475 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" event={"ID":"1fcb63b6-58c0-4f25-b300-a569d79e6815","Type":"ContainerStarted","Data":"c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1"} Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.230542 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.261466 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" podStartSLOduration=5.2614459700000005 podStartE2EDuration="5.26144597s" podCreationTimestamp="2026-01-29 16:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:35.252104926 +0000 UTC m=+1467.739325523" watchObservedRunningTime="2026-01-29 16:53:35.26144597 +0000 UTC m=+1467.748649186" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.359096 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-mczvv"] Jan 29 16:53:35 crc kubenswrapper[4813]: W0129 16:53:35.393718 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabf498ff_9c45_421e_9db3_f114936b22e8.slice/crio-199f3f7af77b10a2cfbdea72d60b2b3ca988b1141f75564ba6c39d3b22da9a14 WatchSource:0}: Error finding container 199f3f7af77b10a2cfbdea72d60b2b3ca988b1141f75564ba6c39d3b22da9a14: Status 404 returned error can't find the container with id 199f3f7af77b10a2cfbdea72d60b2b3ca988b1141f75564ba6c39d3b22da9a14 Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.661514 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.815491 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-swift-storage-0\") pod \"1fcb63b6-58c0-4f25-b300-a569d79e6815\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.815653 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-sb\") pod \"1fcb63b6-58c0-4f25-b300-a569d79e6815\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.815750 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djmpp\" (UniqueName: \"kubernetes.io/projected/1fcb63b6-58c0-4f25-b300-a569d79e6815-kube-api-access-djmpp\") pod \"1fcb63b6-58c0-4f25-b300-a569d79e6815\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.815814 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-svc\") pod \"1fcb63b6-58c0-4f25-b300-a569d79e6815\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.815845 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-nb\") pod \"1fcb63b6-58c0-4f25-b300-a569d79e6815\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.815871 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-config\") pod \"1fcb63b6-58c0-4f25-b300-a569d79e6815\" (UID: \"1fcb63b6-58c0-4f25-b300-a569d79e6815\") " Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.822842 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fcb63b6-58c0-4f25-b300-a569d79e6815-kube-api-access-djmpp" (OuterVolumeSpecName: "kube-api-access-djmpp") pod "1fcb63b6-58c0-4f25-b300-a569d79e6815" (UID: "1fcb63b6-58c0-4f25-b300-a569d79e6815"). InnerVolumeSpecName "kube-api-access-djmpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.862179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1fcb63b6-58c0-4f25-b300-a569d79e6815" (UID: "1fcb63b6-58c0-4f25-b300-a569d79e6815"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.863555 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fcb63b6-58c0-4f25-b300-a569d79e6815" (UID: "1fcb63b6-58c0-4f25-b300-a569d79e6815"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.864654 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-config" (OuterVolumeSpecName: "config") pod "1fcb63b6-58c0-4f25-b300-a569d79e6815" (UID: "1fcb63b6-58c0-4f25-b300-a569d79e6815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.872014 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1fcb63b6-58c0-4f25-b300-a569d79e6815" (UID: "1fcb63b6-58c0-4f25-b300-a569d79e6815"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.873669 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1fcb63b6-58c0-4f25-b300-a569d79e6815" (UID: "1fcb63b6-58c0-4f25-b300-a569d79e6815"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.918304 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.918355 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.918369 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.918380 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.918391 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fcb63b6-58c0-4f25-b300-a569d79e6815-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:35 crc kubenswrapper[4813]: I0129 16:53:35.918403 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djmpp\" (UniqueName: \"kubernetes.io/projected/1fcb63b6-58c0-4f25-b300-a569d79e6815-kube-api-access-djmpp\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.238888 4813 generic.go:334] "Generic (PLEG): container finished" podID="abf498ff-9c45-421e-9db3-f114936b22e8" containerID="5e10863fe4481e09897751870530afb49988df0d247f791f62ba2296c587cffb" exitCode=0 Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.242992 4813 generic.go:334] "Generic (PLEG): container finished" podID="1fcb63b6-58c0-4f25-b300-a569d79e6815" containerID="c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1" exitCode=0 Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.243179 4813 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.250832 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" event={"ID":"abf498ff-9c45-421e-9db3-f114936b22e8","Type":"ContainerDied","Data":"5e10863fe4481e09897751870530afb49988df0d247f791f62ba2296c587cffb"} Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.250876 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" event={"ID":"abf498ff-9c45-421e-9db3-f114936b22e8","Type":"ContainerStarted","Data":"199f3f7af77b10a2cfbdea72d60b2b3ca988b1141f75564ba6c39d3b22da9a14"} Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.250890 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" event={"ID":"1fcb63b6-58c0-4f25-b300-a569d79e6815","Type":"ContainerDied","Data":"c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1"} Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.250904 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" event={"ID":"1fcb63b6-58c0-4f25-b300-a569d79e6815","Type":"ContainerDied","Data":"c1badbec5b23de1a3b2a98cbf534d0a10f86bb25902f335bd0648828bc699a8f"} Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.250924 4813 scope.go:117] "RemoveContainer" containerID="c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1" Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.362166 4813 scope.go:117] "RemoveContainer" containerID="f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436" Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.397376 4813 scope.go:117] "RemoveContainer" containerID="c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1" Jan 29 16:53:36 crc kubenswrapper[4813]: E0129 16:53:36.398292 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1\": container with ID starting with c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1 not found: ID does not exist" containerID="c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1" Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.398332 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1"} err="failed to get container status \"c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1\": rpc error: code = NotFound desc = could not find container \"c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1\": container with ID starting with c43a41747ffbff7d27186fce4a946f1dcfe3c7b7b25350768e79220c3ec301b1 not found: ID does not exist" Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.398356 4813 scope.go:117] "RemoveContainer" containerID="f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436" Jan 29 16:53:36 crc kubenswrapper[4813]: E0129 16:53:36.398738 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436\": container with ID starting with f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436 not found: ID does not exist" 
containerID="f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436" Jan 29 16:53:36 crc kubenswrapper[4813]: I0129 16:53:36.398762 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436"} err="failed to get container status \"f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436\": rpc error: code = NotFound desc = could not find container \"f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436\": container with ID starting with f12d4545ede5e93b9c778b7828ae42e4d7692c68e9e5ac8e7a135f78b30de436 not found: ID does not exist" Jan 29 16:53:37 crc kubenswrapper[4813]: I0129 16:53:37.254339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" event={"ID":"abf498ff-9c45-421e-9db3-f114936b22e8","Type":"ContainerStarted","Data":"c36e3f815fe5757c306487f5bb4e69ca1c9e3e2b468d633fb7a3705270397f42"} Jan 29 16:53:37 crc kubenswrapper[4813]: I0129 16:53:37.254633 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:37 crc kubenswrapper[4813]: I0129 16:53:37.277948 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podStartSLOduration=3.277928805 podStartE2EDuration="3.277928805s" podCreationTimestamp="2026-01-29 16:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:37.274175289 +0000 UTC m=+1469.761378505" watchObservedRunningTime="2026-01-29 16:53:37.277928805 +0000 UTC m=+1469.765132021" Jan 29 16:53:44 crc kubenswrapper[4813]: I0129 16:53:44.356539 4813 generic.go:334] "Generic (PLEG): container finished" podID="346c46a8-5fe9-44d0-882a-6ef6412e6e0d" containerID="5d34de84fb2abfbb4654d44b445e12d4d418d88954b7a25ac8505eb022196a89" exitCode=0 Jan 29 16:53:44 crc kubenswrapper[4813]: I0129 16:53:44.356636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7w666" event={"ID":"346c46a8-5fe9-44d0-882a-6ef6412e6e0d","Type":"ContainerDied","Data":"5d34de84fb2abfbb4654d44b445e12d4d418d88954b7a25ac8505eb022196a89"} Jan 29 16:53:44 crc kubenswrapper[4813]: I0129 16:53:44.908343 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:53:44 crc kubenswrapper[4813]: I0129 16:53:44.965784 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ct7v4"] Jan 29 16:53:44 crc kubenswrapper[4813]: I0129 16:53:44.966128 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" podUID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" containerName="dnsmasq-dns" containerID="cri-o://657006dc517868a8d5f18265519f539855679f9f9dc88ebe6e1bed7196b535b3" gracePeriod=10 Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.380173 4813 generic.go:334] "Generic (PLEG): container finished" podID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" containerID="657006dc517868a8d5f18265519f539855679f9f9dc88ebe6e1bed7196b535b3" exitCode=0 Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.380239 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" 
event={"ID":"6c75534b-dabc-4df6-bd87-515d6ec3e73d","Type":"ContainerDied","Data":"657006dc517868a8d5f18265519f539855679f9f9dc88ebe6e1bed7196b535b3"} Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.380598 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" event={"ID":"6c75534b-dabc-4df6-bd87-515d6ec3e73d","Type":"ContainerDied","Data":"d8e082cee304858bf6ce8324fdf6d52bb1b4c0b9bff33f5f6c5642e63994f950"} Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.380618 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e082cee304858bf6ce8324fdf6d52bb1b4c0b9bff33f5f6c5642e63994f950" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.451868 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.599251 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-sb\") pod \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.599371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlbp7\" (UniqueName: \"kubernetes.io/projected/6c75534b-dabc-4df6-bd87-515d6ec3e73d-kube-api-access-qlbp7\") pod \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.599428 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-nb\") pod \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.599458 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-dns-svc\") pod \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.599481 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-config\") pod \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\" (UID: \"6c75534b-dabc-4df6-bd87-515d6ec3e73d\") " Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.606988 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c75534b-dabc-4df6-bd87-515d6ec3e73d-kube-api-access-qlbp7" (OuterVolumeSpecName: "kube-api-access-qlbp7") pod "6c75534b-dabc-4df6-bd87-515d6ec3e73d" (UID: "6c75534b-dabc-4df6-bd87-515d6ec3e73d"). InnerVolumeSpecName "kube-api-access-qlbp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.650194 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c75534b-dabc-4df6-bd87-515d6ec3e73d" (UID: "6c75534b-dabc-4df6-bd87-515d6ec3e73d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.655547 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c75534b-dabc-4df6-bd87-515d6ec3e73d" (UID: "6c75534b-dabc-4df6-bd87-515d6ec3e73d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.658898 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-config" (OuterVolumeSpecName: "config") pod "6c75534b-dabc-4df6-bd87-515d6ec3e73d" (UID: "6c75534b-dabc-4df6-bd87-515d6ec3e73d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.659573 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c75534b-dabc-4df6-bd87-515d6ec3e73d" (UID: "6c75534b-dabc-4df6-bd87-515d6ec3e73d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.702135 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.702165 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlbp7\" (UniqueName: \"kubernetes.io/projected/6c75534b-dabc-4df6-bd87-515d6ec3e73d-kube-api-access-qlbp7\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.702177 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.702185 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.702195 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c75534b-dabc-4df6-bd87-515d6ec3e73d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.741618 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.904595 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-config-data\") pod \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.904744 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv9dk\" (UniqueName: \"kubernetes.io/projected/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-kube-api-access-cv9dk\") pod \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.904866 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-combined-ca-bundle\") pod \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\" (UID: \"346c46a8-5fe9-44d0-882a-6ef6412e6e0d\") " Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.909164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-kube-api-access-cv9dk" (OuterVolumeSpecName: "kube-api-access-cv9dk") pod "346c46a8-5fe9-44d0-882a-6ef6412e6e0d" (UID: "346c46a8-5fe9-44d0-882a-6ef6412e6e0d"). InnerVolumeSpecName "kube-api-access-cv9dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.930420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "346c46a8-5fe9-44d0-882a-6ef6412e6e0d" (UID: "346c46a8-5fe9-44d0-882a-6ef6412e6e0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:45 crc kubenswrapper[4813]: I0129 16:53:45.948420 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-config-data" (OuterVolumeSpecName: "config-data") pod "346c46a8-5fe9-44d0-882a-6ef6412e6e0d" (UID: "346c46a8-5fe9-44d0-882a-6ef6412e6e0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.006411 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.006471 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv9dk\" (UniqueName: \"kubernetes.io/projected/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-kube-api-access-cv9dk\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.006485 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/346c46a8-5fe9-44d0-882a-6ef6412e6e0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.390750 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-7w666" event={"ID":"346c46a8-5fe9-44d0-882a-6ef6412e6e0d","Type":"ContainerDied","Data":"1f0a869004a311b8dcea9e30e40db3accf6e50ef9d27551cf4968bd166676aec"} Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.391459 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f0a869004a311b8dcea9e30e40db3accf6e50ef9d27551cf4968bd166676aec" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.390821 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-7w666" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.390819 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-ct7v4" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.430407 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ct7v4"] Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.438363 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-ct7v4"] Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.647943 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kwppx"] Jan 29 16:53:46 crc kubenswrapper[4813]: E0129 16:53:46.648385 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346c46a8-5fe9-44d0-882a-6ef6412e6e0d" containerName="keystone-db-sync" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.648406 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="346c46a8-5fe9-44d0-882a-6ef6412e6e0d" containerName="keystone-db-sync" Jan 29 16:53:46 crc kubenswrapper[4813]: E0129 16:53:46.648420 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcb63b6-58c0-4f25-b300-a569d79e6815" containerName="init" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.648428 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcb63b6-58c0-4f25-b300-a569d79e6815" containerName="init" Jan 29 16:53:46 crc kubenswrapper[4813]: E0129 16:53:46.648438 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" containerName="dnsmasq-dns" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.648444 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" containerName="dnsmasq-dns" Jan 29 16:53:46 crc kubenswrapper[4813]: E0129 16:53:46.648467 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" containerName="init" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.648473 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" containerName="init" Jan 29 16:53:46 crc kubenswrapper[4813]: E0129 16:53:46.648486 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcb63b6-58c0-4f25-b300-a569d79e6815" containerName="dnsmasq-dns" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.648491 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcb63b6-58c0-4f25-b300-a569d79e6815" containerName="dnsmasq-dns" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.648649 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="346c46a8-5fe9-44d0-882a-6ef6412e6e0d" containerName="keystone-db-sync" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.648663 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fcb63b6-58c0-4f25-b300-a569d79e6815" containerName="dnsmasq-dns" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.648676 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" containerName="dnsmasq-dns" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.649608 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.662701 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kwppx"] Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.734256 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mdj8p"] Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.740664 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.744608 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.744796 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.745311 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l2qv9" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.745473 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.746123 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.750128 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mdj8p"] Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.819966 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-scripts\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820420 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-combined-ca-bundle\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820499 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820531 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4mwd\" (UniqueName: \"kubernetes.io/projected/03ced5e2-5961-4c26-b918-1f7dfd02e734-kube-api-access-x4mwd\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820590 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-fernet-keys\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820612 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-config\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820641 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-config-data\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820659 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820677 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwzg4\" (UniqueName: \"kubernetes.io/projected/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-kube-api-access-rwzg4\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-credential-keys\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.820779 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923303 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-credential-keys\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923384 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc 
kubenswrapper[4813]: I0129 16:53:46.923432 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-scripts\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923456 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-combined-ca-bundle\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923477 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923499 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4mwd\" (UniqueName: \"kubernetes.io/projected/03ced5e2-5961-4c26-b918-1f7dfd02e734-kube-api-access-x4mwd\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923526 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-fernet-keys\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923546 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-config\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923570 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-config-data\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.923607 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwzg4\" (UniqueName: \"kubernetes.io/projected/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-kube-api-access-rwzg4\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.925207 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
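The block above is one reconcile pass: every volume in the desired state that is not yet in the actual state gets an "operationExecutor.MountVolume started" line, and a "MountVolume.SetUp succeeded" once the plugin call returns. A compressed Go sketch of that pass (the types and names are illustrative, not kubelet source):

package main

import "fmt"

// Illustrative desired/actual caches keyed by the UniqueName format seen
// above, e.g. "kubernetes.io/secret/<pod-uid>-fernet-keys".
type reconciler struct {
	desired map[string]bool // volumes the pod specs want mounted
	actual  map[string]bool // volumes already set up on the node
}

// mountPass announces the operation, runs the plugin's SetUp, and records
// success, mirroring the started/succeeded pairing in the entries above.
func (r *reconciler) mountPass(setUp func(string) error) {
	for name := range r.desired {
		if r.actual[name] {
			continue // already mounted
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q\n", name)
		if err := setUp(name); err != nil {
			fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n", name, err)
			continue // left for the next sync pass
		}
		r.actual[name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", name)
	}
}

func main() {
	r := &reconciler{
		desired: map[string]bool{"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-fernet-keys": true},
		actual:  map[string]bool{},
	}
	r.mountPass(func(string) error { return nil })
}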
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.925896 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.926605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.926878 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-config\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.938055 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.940996 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-combined-ca-bundle\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.941404 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-credential-keys\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.942934 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-fernet-keys\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.956812 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-scripts\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.957240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-config-data\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " 
pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.972011 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwzg4\" (UniqueName: \"kubernetes.io/projected/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-kube-api-access-rwzg4\") pod \"dnsmasq-dns-5fdbfbc95f-kwppx\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:46 crc kubenswrapper[4813]: I0129 16:53:46.975677 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.009836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4mwd\" (UniqueName: \"kubernetes.io/projected/03ced5e2-5961-4c26-b918-1f7dfd02e734-kube-api-access-x4mwd\") pod \"keystone-bootstrap-mdj8p\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.071944 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.111576 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.120876 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.128351 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.133798 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.164403 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.201021 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-mwds2"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.206907 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.215096 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.215293 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-frg6g" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.215431 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.237951 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-log-httpd\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.238167 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-config-data\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.238293 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-scripts\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.238422 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.238476 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.238515 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lznv\" (UniqueName: \"kubernetes.io/projected/2641bf11-eabe-482d-90da-30c64479ae22-kube-api-access-9lznv\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.238550 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-run-httpd\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.250195 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mwds2"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.331313 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-72hcg"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.332508 4813 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.341684 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.341863 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ht4s5" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342228 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342294 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-config-data\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342324 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lznv\" (UniqueName: \"kubernetes.io/projected/2641bf11-eabe-482d-90da-30c64479ae22-kube-api-access-9lznv\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342352 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-run-httpd\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342386 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-log-httpd\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342455 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-etc-machine-id\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-config-data\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342552 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rlh\" (UniqueName: \"kubernetes.io/projected/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-kube-api-access-v4rlh\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342582 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-db-sync-config-data\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342614 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-scripts\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342642 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-combined-ca-bundle\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342669 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-scripts\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.342725 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.347100 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-run-httpd\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.347339 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-log-httpd\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.352371 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-72hcg"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.352992 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.367973 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-scripts\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.373426 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-config-data\") pod \"ceilometer-0\" (UID: 
\"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.379151 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.399022 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lznv\" (UniqueName: \"kubernetes.io/projected/2641bf11-eabe-482d-90da-30c64479ae22-kube-api-access-9lznv\") pod \"ceilometer-0\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.403228 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4tc6w"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.404668 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.412989 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.413208 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.413328 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9mxwt" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.443131 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-6l7qs"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444355 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-config-data\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444710 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-combined-ca-bundle\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444754 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-etc-machine-id\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444811 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-db-sync-config-data\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444842 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rlh\" (UniqueName: 
\"kubernetes.io/projected/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-kube-api-access-v4rlh\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-db-sync-config-data\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444896 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-combined-ca-bundle\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444923 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-scripts\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.444958 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5fjx\" (UniqueName: \"kubernetes.io/projected/a9d3644d-4332-4c4e-a354-11aa4588e143-kube-api-access-k5fjx\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.445523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-etc-machine-id\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.455601 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.455828 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-config-data\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.455966 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-scripts\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.457029 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-db-sync-config-data\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.463138 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-combined-ca-bundle\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.463612 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-m8mxr" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.463789 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.463939 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.488979 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rlh\" (UniqueName: \"kubernetes.io/projected/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-kube-api-access-v4rlh\") pod \"cinder-db-sync-mwds2\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.489721 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.526490 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4tc6w"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.547932 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-scripts\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.547998 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-combined-ca-bundle\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548035 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7c43bd-490e-4742-8a72-ed0687faf4dd-logs\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548101 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-combined-ca-bundle\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548187 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvd4r\" (UniqueName: \"kubernetes.io/projected/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-kube-api-access-nvd4r\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548317 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-db-sync-config-data\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548403 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-config\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548435 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-combined-ca-bundle\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548494 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fjx\" (UniqueName: 
\"kubernetes.io/projected/a9d3644d-4332-4c4e-a354-11aa4588e143-kube-api-access-k5fjx\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548622 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.548654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq8n4\" (UniqueName: \"kubernetes.io/projected/db7c43bd-490e-4742-8a72-ed0687faf4dd-kube-api-access-gq8n4\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.555894 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-combined-ca-bundle\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.556482 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6l7qs"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.557240 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-db-sync-config-data\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.570398 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kwppx"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.592852 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fjx\" (UniqueName: \"kubernetes.io/projected/a9d3644d-4332-4c4e-a354-11aa4588e143-kube-api-access-k5fjx\") pod \"barbican-db-sync-72hcg\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.605962 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwds2" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.619233 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-9fd9p"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.622007 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.658787 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-9fd9p"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.660479 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-scripts\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.660598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7c43bd-490e-4742-8a72-ed0687faf4dd-logs\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.660644 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-combined-ca-bundle\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.660671 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvd4r\" (UniqueName: \"kubernetes.io/projected/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-kube-api-access-nvd4r\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.660752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-config\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.660768 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-combined-ca-bundle\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.660991 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.661042 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq8n4\" (UniqueName: \"kubernetes.io/projected/db7c43bd-490e-4742-8a72-ed0687faf4dd-kube-api-access-gq8n4\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.664686 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7c43bd-490e-4742-8a72-ed0687faf4dd-logs\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " 
pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.671534 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.671873 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-combined-ca-bundle\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.680050 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq8n4\" (UniqueName: \"kubernetes.io/projected/db7c43bd-490e-4742-8a72-ed0687faf4dd-kube-api-access-gq8n4\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.683731 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-combined-ca-bundle\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.686040 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvd4r\" (UniqueName: \"kubernetes.io/projected/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-kube-api-access-nvd4r\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.693614 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-config\") pod \"neutron-db-sync-4tc6w\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.694043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-scripts\") pod \"placement-db-sync-6l7qs\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.762201 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.762321 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzs6\" (UniqueName: \"kubernetes.io/projected/04dc22f3-d546-420e-96d0-103b9b9607d5-kube-api-access-2gzs6\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.762350 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.762397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-config\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.762460 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.762484 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.836598 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-72hcg" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.863962 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-config\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.864014 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.864045 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.864099 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.864301 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gzs6\" (UniqueName: \"kubernetes.io/projected/04dc22f3-d546-420e-96d0-103b9b9607d5-kube-api-access-2gzs6\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: 
\"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.864326 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.865196 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.865229 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.865994 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.871711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.872613 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-config\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.882358 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.917145 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6l7qs" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.923052 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gzs6\" (UniqueName: \"kubernetes.io/projected/04dc22f3-d546-420e-96d0-103b9b9607d5-kube-api-access-2gzs6\") pod \"dnsmasq-dns-6f6f8cb849-9fd9p\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.946160 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.947786 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.948086 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.956936 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.957132 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ktqxj" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.957364 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.957592 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 16:53:47 crc kubenswrapper[4813]: I0129 16:53:47.991247 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.032545 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kwppx"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.069070 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj87n\" (UniqueName: \"kubernetes.io/projected/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-kube-api-access-nj87n\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.069375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.069411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.069434 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.069460 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.069493 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.069513 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-logs\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.069584 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.095463 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.097527 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.104409 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.129435 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.136786 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.150263 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mdj8p"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.170898 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-logs\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171083 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171179 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171294 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-logs\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 
29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171399 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171478 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171565 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171659 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171851 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmzb\" (UniqueName: \"kubernetes.io/projected/f661563e-cd58-4882-94c2-c2270826a65a-kube-api-access-gmmzb\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.171957 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj87n\" (UniqueName: \"kubernetes.io/projected/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-kube-api-access-nj87n\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.172101 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.173244 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.173358 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.173448 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.173516 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.178181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-config-data\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.178633 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.179396 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.178910 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.179644 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-logs\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.187893 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-scripts\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.188912 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-combined-ca-bundle\") pod \"glance-default-external-api-0\" 
(UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.178885 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.211469 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.232054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj87n\" (UniqueName: \"kubernetes.io/projected/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-kube-api-access-nj87n\") pod \"glance-default-external-api-0\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") " pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.276574 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.279093 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.279685 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.282188 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.282866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmmzb\" (UniqueName: \"kubernetes.io/projected/f661563e-cd58-4882-94c2-c2270826a65a-kube-api-access-gmmzb\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.281574 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " 
pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.287040 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.287098 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-logs\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.287189 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.287427 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.287805 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c75534b-dabc-4df6-bd87-515d6ec3e73d" path="/var/lib/kubelet/pods/6c75534b-dabc-4df6-bd87-515d6ec3e73d/volumes" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.290485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.295933 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.314897 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-logs\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.317753 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.318057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.318323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmmzb\" (UniqueName: \"kubernetes.io/projected/f661563e-cd58-4882-94c2-c2270826a65a-kube-api-access-gmmzb\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.322468 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.345385 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.411248 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mwds2"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.438864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdj8p" event={"ID":"03ced5e2-5961-4c26-b918-1f7dfd02e734","Type":"ContainerStarted","Data":"e4c2e0fb21d61a364759a04c1757945c8bb66973469a1ac47eeeb37631b01155"} Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.443737 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2641bf11-eabe-482d-90da-30c64479ae22","Type":"ContainerStarted","Data":"40168312b2787874f1fa7a36e74919a432dbde9769dda3bcf19a7a42757ff22b"} Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.447322 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwds2" event={"ID":"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6","Type":"ContainerStarted","Data":"2add7b406209ef570d3983a120196acc30660abba6c14eac6269f2a50b90583f"} Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.449194 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" event={"ID":"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13","Type":"ContainerStarted","Data":"dbc5cf05ad7d3b7dc45047ef3b281b646c3e882ea7d769fcb2a9124a0d854b21"} Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.511624 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.562899 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-72hcg"] Jan 29 16:53:48 crc kubenswrapper[4813]: W0129 16:53:48.576158 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d3644d_4332_4c4e_a354_11aa4588e143.slice/crio-14d80d20534fb3c479ef21fa01a55a510cb657561b4941cd82fd308e08e3f5d4 WatchSource:0}: Error finding container 14d80d20534fb3c479ef21fa01a55a510cb657561b4941cd82fd308e08e3f5d4: Status 404 returned error can't find the container with id 14d80d20534fb3c479ef21fa01a55a510cb657561b4941cd82fd308e08e3f5d4 Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.723714 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4tc6w"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.750195 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-6l7qs"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.778915 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-9fd9p"] Jan 29 16:53:48 crc kubenswrapper[4813]: I0129 16:53:48.998720 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:53:49 crc kubenswrapper[4813]: W0129 16:53:49.032798 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9ec3ba9_a6df_4b0a_93ca_4f56fa0765a7.slice/crio-4c6550f865e8893efb2329c69681f06e705e95730b6f789e689f25172eab7651 WatchSource:0}: Error finding container 4c6550f865e8893efb2329c69681f06e705e95730b6f789e689f25172eab7651: Status 404 returned error can't find the container with id 4c6550f865e8893efb2329c69681f06e705e95730b6f789e689f25172eab7651 Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.164084 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:53:49 crc kubenswrapper[4813]: W0129 16:53:49.203493 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf661563e_cd58_4882_94c2_c2270826a65a.slice/crio-33812885cab6af5c942e1abc30b2427e46981b16c7ba9138709e6f276b999b24 WatchSource:0}: Error finding container 33812885cab6af5c942e1abc30b2427e46981b16c7ba9138709e6f276b999b24: Status 404 returned error can't find the container with id 33812885cab6af5c942e1abc30b2427e46981b16c7ba9138709e6f276b999b24 Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.493046 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4tc6w" event={"ID":"1bf146e2-d6a8-4c27-a3f5-ec80641f6017","Type":"ContainerStarted","Data":"1d6227c2caf355a9f0fbf758a68ba38abb2b953c39c33eb99281e58b465e18ff"} Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.493096 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4tc6w" event={"ID":"1bf146e2-d6a8-4c27-a3f5-ec80641f6017","Type":"ContainerStarted","Data":"1f908de25adc5699d3348704d9efcf231680e54f6c2380a10b1a07b221716859"} Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.496674 4813 generic.go:334] "Generic (PLEG): container finished" podID="dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" containerID="388e9f573018b6c37872976a7955819aa18626167090163d80a7edc3503ff5c5" exitCode=0 Jan 29 16:53:49 crc 
kubenswrapper[4813]: I0129 16:53:49.496747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" event={"ID":"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13","Type":"ContainerDied","Data":"388e9f573018b6c37872976a7955819aa18626167090163d80a7edc3503ff5c5"}
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.502073 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f661563e-cd58-4882-94c2-c2270826a65a","Type":"ContainerStarted","Data":"33812885cab6af5c942e1abc30b2427e46981b16c7ba9138709e6f276b999b24"}
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.521351 4813 generic.go:334] "Generic (PLEG): container finished" podID="04dc22f3-d546-420e-96d0-103b9b9607d5" containerID="a6948d92b7dcb872b5c7356b2e950eba1f719743fab090e54cc404d2863fbd48" exitCode=0
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.521554 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" event={"ID":"04dc22f3-d546-420e-96d0-103b9b9607d5","Type":"ContainerDied","Data":"a6948d92b7dcb872b5c7356b2e950eba1f719743fab090e54cc404d2863fbd48"}
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.521583 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" event={"ID":"04dc22f3-d546-420e-96d0-103b9b9607d5","Type":"ContainerStarted","Data":"160d8c3cbc114f3eb00e7af6f78aa268ae6f109ea014daad77c651d0d784641f"}
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.530305 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdj8p" event={"ID":"03ced5e2-5961-4c26-b918-1f7dfd02e734","Type":"ContainerStarted","Data":"95e1ed9b55c99c7e2bc8ecb98831d34a4e578878550c94782b4a588796b0ec9e"}
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.531316 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4tc6w" podStartSLOduration=2.5313023919999997 podStartE2EDuration="2.531302392s" podCreationTimestamp="2026-01-29 16:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:49.515554868 +0000 UTC m=+1482.002758094" watchObservedRunningTime="2026-01-29 16:53:49.531302392 +0000 UTC m=+1482.018505618"
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.537104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7","Type":"ContainerStarted","Data":"4c6550f865e8893efb2329c69681f06e705e95730b6f789e689f25172eab7651"}
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.540986 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-72hcg" event={"ID":"a9d3644d-4332-4c4e-a354-11aa4588e143","Type":"ContainerStarted","Data":"14d80d20534fb3c479ef21fa01a55a510cb657561b4941cd82fd308e08e3f5d4"}
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.542927 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6l7qs" event={"ID":"db7c43bd-490e-4742-8a72-ed0687faf4dd","Type":"ContainerStarted","Data":"cc196734c95a3f30e6d6acd67b467ff588ff9d5b0572ae243455ea2254d2b67f"}
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.648214 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mdj8p" podStartSLOduration=3.6481684899999998 podStartE2EDuration="3.64816849s" podCreationTimestamp="2026-01-29 16:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:49.637844109 +0000 UTC m=+1482.125047335" watchObservedRunningTime="2026-01-29 16:53:49.64816849 +0000 UTC m=+1482.135371726"
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.920275 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 16:53:49 crc kubenswrapper[4813]: I0129 16:53:49.961222 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx"
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.052201 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.083276 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.159829 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-config\") pod \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") "
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.160387 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-nb\") pod \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") "
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.160528 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-sb\") pod \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") "
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.160610 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwzg4\" (UniqueName: \"kubernetes.io/projected/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-kube-api-access-rwzg4\") pod \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") "
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.160637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-svc\") pod \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") "
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.160691 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-swift-storage-0\") pod \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\" (UID: \"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13\") "
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.168656 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-kube-api-access-rwzg4" (OuterVolumeSpecName: "kube-api-access-rwzg4") pod "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" (UID: "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13"). InnerVolumeSpecName "kube-api-access-rwzg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.211937 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-config" (OuterVolumeSpecName: "config") pod "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" (UID: "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.221921 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" (UID: "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.234426 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" (UID: "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.278615 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" (UID: "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.298605 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.298937 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwzg4\" (UniqueName: \"kubernetes.io/projected/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-kube-api-access-rwzg4\") on node \"crc\" DevicePath \"\""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.299018 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.299139 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-config\") on node \"crc\" DevicePath \"\""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.299215 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.368772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" (UID: "dcd3cba6-778d-4355-b6fa-e5f0eb05ee13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.400964 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.557412 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7","Type":"ContainerStarted","Data":"66e56c97d3fac86757b79d0f7f688f39ccdec446fb3d4e62856fde34f1e7220c"}
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.563050 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx" event={"ID":"dcd3cba6-778d-4355-b6fa-e5f0eb05ee13","Type":"ContainerDied","Data":"dbc5cf05ad7d3b7dc45047ef3b281b646c3e882ea7d769fcb2a9124a0d854b21"}
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.563101 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-kwppx"
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.563195 4813 scope.go:117] "RemoveContainer" containerID="388e9f573018b6c37872976a7955819aa18626167090163d80a7edc3503ff5c5"
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.578442 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" event={"ID":"04dc22f3-d546-420e-96d0-103b9b9607d5","Type":"ContainerStarted","Data":"b2f039f37c8b1bc651f0a8f050187759359507ddf84edbd6a349b6f625f8453a"}
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.578481 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p"
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.645661 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kwppx"]
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.659971 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-kwppx"]
Jan 29 16:53:50 crc kubenswrapper[4813]: I0129 16:53:50.664148 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" podStartSLOduration=3.664126152 podStartE2EDuration="3.664126152s" podCreationTimestamp="2026-01-29 16:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:50.648741127 +0000 UTC m=+1483.135944343" watchObservedRunningTime="2026-01-29 16:53:50.664126152 +0000 UTC m=+1483.151329368"
Jan 29 16:53:51 crc kubenswrapper[4813]: I0129 16:53:51.590217 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f661563e-cd58-4882-94c2-c2270826a65a","Type":"ContainerStarted","Data":"c27197c75e34a046a2c1ee209558c29cdaffa7d11c9e7da2820d57409075f85d"}
Jan 29 16:53:52 crc kubenswrapper[4813]: I0129 16:53:52.253295 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" path="/var/lib/kubelet/pods/dcd3cba6-778d-4355-b6fa-e5f0eb05ee13/volumes"
Jan 29 16:53:52 crc kubenswrapper[4813]: I0129 16:53:52.600609 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7","Type":"ContainerStarted","Data":"bd5ee678c6d9e49f2c23f790343a80a49355a0dd9c4596319e4293f9be6eaf7f"}
Jan 29 16:53:53 crc kubenswrapper[4813]: I0129 16:53:53.609135 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerName="glance-log" containerID="cri-o://66e56c97d3fac86757b79d0f7f688f39ccdec446fb3d4e62856fde34f1e7220c" gracePeriod=30
Jan 29 16:53:53 crc kubenswrapper[4813]: I0129 16:53:53.609249 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerName="glance-httpd" containerID="cri-o://bd5ee678c6d9e49f2c23f790343a80a49355a0dd9c4596319e4293f9be6eaf7f" gracePeriod=30
Jan 29 16:53:54 crc kubenswrapper[4813]: I0129 16:53:53.642532 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.642506003 podStartE2EDuration="7.642506003s" podCreationTimestamp="2026-01-29 16:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:53.632039228 +0000 UTC m=+1486.119242444" watchObservedRunningTime="2026-01-29 16:53:53.642506003 +0000 UTC m=+1486.129709249"
Jan 29 16:53:54 crc kubenswrapper[4813]: I0129 16:53:54.620442 4813 generic.go:334] "Generic (PLEG): container finished" podID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerID="66e56c97d3fac86757b79d0f7f688f39ccdec446fb3d4e62856fde34f1e7220c" exitCode=143
Jan 29 16:53:54 crc kubenswrapper[4813]: I0129 16:53:54.620515 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7","Type":"ContainerDied","Data":"66e56c97d3fac86757b79d0f7f688f39ccdec446fb3d4e62856fde34f1e7220c"}
Jan 29 16:53:55 crc kubenswrapper[4813]: I0129 16:53:55.631299 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f661563e-cd58-4882-94c2-c2270826a65a","Type":"ContainerStarted","Data":"6fcafd07df42eadabcf09ee5f2d0d1eaf89e33becf6ad4980fc293563804bcd7"}
Jan 29 16:53:56 crc kubenswrapper[4813]: I0129 16:53:56.644165 4813 generic.go:334] "Generic (PLEG): container finished" podID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerID="bd5ee678c6d9e49f2c23f790343a80a49355a0dd9c4596319e4293f9be6eaf7f" exitCode=0
Jan 29 16:53:56 crc kubenswrapper[4813]: I0129 16:53:56.644225 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7","Type":"ContainerDied","Data":"bd5ee678c6d9e49f2c23f790343a80a49355a0dd9c4596319e4293f9be6eaf7f"}
Jan 29 16:53:57 crc kubenswrapper[4813]: I0129 16:53:57.956269 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p"
Jan 29 16:53:58 crc kubenswrapper[4813]: I0129 16:53:58.037645 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-mczvv"]
Jan 29 16:53:58 crc kubenswrapper[4813]: I0129 16:53:58.037983 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" containerID="cri-o://c36e3f815fe5757c306487f5bb4e69ca1c9e3e2b468d633fb7a3705270397f42" gracePeriod=10
Jan 29 16:53:59 crc kubenswrapper[4813]: I0129 16:53:59.670749 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f661563e-cd58-4882-94c2-c2270826a65a" containerName="glance-log" containerID="cri-o://c27197c75e34a046a2c1ee209558c29cdaffa7d11c9e7da2820d57409075f85d" gracePeriod=30
Jan 29 16:53:59 crc kubenswrapper[4813]: I0129 16:53:59.670804 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f661563e-cd58-4882-94c2-c2270826a65a" containerName="glance-httpd" containerID="cri-o://6fcafd07df42eadabcf09ee5f2d0d1eaf89e33becf6ad4980fc293563804bcd7" gracePeriod=30
Jan 29 16:53:59 crc kubenswrapper[4813]: I0129 16:53:59.706674 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.706649127 podStartE2EDuration="12.706649127s" podCreationTimestamp="2026-01-29 16:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:53:59.691571941 +0000 UTC m=+1492.178775157" watchObservedRunningTime="2026-01-29 16:53:59.706649127 +0000 UTC m=+1492.193852343"
Jan 29 16:53:59 crc kubenswrapper[4813]: I0129 16:53:59.907488 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused"
Jan 29 16:54:00 crc kubenswrapper[4813]: I0129 16:54:00.240100 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:54:00 crc kubenswrapper[4813]: I0129 16:54:00.240190 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:54:00 crc kubenswrapper[4813]: I0129 16:54:00.682408 4813 generic.go:334] "Generic (PLEG): container finished" podID="abf498ff-9c45-421e-9db3-f114936b22e8" containerID="c36e3f815fe5757c306487f5bb4e69ca1c9e3e2b468d633fb7a3705270397f42" exitCode=0
Jan 29 16:54:00 crc kubenswrapper[4813]: I0129 16:54:00.682494 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" event={"ID":"abf498ff-9c45-421e-9db3-f114936b22e8","Type":"ContainerDied","Data":"c36e3f815fe5757c306487f5bb4e69ca1c9e3e2b468d633fb7a3705270397f42"}
Jan 29 16:54:00 crc kubenswrapper[4813]: I0129 16:54:00.687445 4813 generic.go:334] "Generic (PLEG): container finished" podID="f661563e-cd58-4882-94c2-c2270826a65a" containerID="c27197c75e34a046a2c1ee209558c29cdaffa7d11c9e7da2820d57409075f85d" exitCode=143
Jan 29 16:54:00 crc kubenswrapper[4813]: I0129 16:54:00.687498 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f661563e-cd58-4882-94c2-c2270826a65a","Type":"ContainerDied","Data":"c27197c75e34a046a2c1ee209558c29cdaffa7d11c9e7da2820d57409075f85d"}
Jan 29 16:54:01 crc kubenswrapper[4813]: I0129 16:54:01.697435 4813 generic.go:334] "Generic (PLEG): container finished" podID="f661563e-cd58-4882-94c2-c2270826a65a" containerID="6fcafd07df42eadabcf09ee5f2d0d1eaf89e33becf6ad4980fc293563804bcd7" exitCode=0
Jan 29 16:54:01 crc kubenswrapper[4813]: I0129 16:54:01.697516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f661563e-cd58-4882-94c2-c2270826a65a","Type":"ContainerDied","Data":"6fcafd07df42eadabcf09ee5f2d0d1eaf89e33becf6ad4980fc293563804bcd7"}
Jan 29 16:54:04 crc kubenswrapper[4813]: I0129 16:54:04.907406 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused"
Jan 29 16:54:06 crc kubenswrapper[4813]: I0129 16:54:06.351858 4813 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod1fcb63b6-58c0-4f25-b300-a569d79e6815"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod1fcb63b6-58c0-4f25-b300-a569d79e6815] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1fcb63b6_58c0_4f25_b300_a569d79e6815.slice"
Jan 29 16:54:06 crc kubenswrapper[4813]: E0129 16:54:06.352368 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod1fcb63b6-58c0-4f25-b300-a569d79e6815] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod1fcb63b6-58c0-4f25-b300-a569d79e6815] : Timed out while waiting for systemd to remove kubepods-besteffort-pod1fcb63b6_58c0_4f25_b300_a569d79e6815.slice" pod="openstack/dnsmasq-dns-8db84466c-vdpbq" podUID="1fcb63b6-58c0-4f25-b300-a569d79e6815"
Jan 29 16:54:06 crc kubenswrapper[4813]: I0129 16:54:06.735548 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-vdpbq"
Jan 29 16:54:06 crc kubenswrapper[4813]: I0129 16:54:06.766650 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vdpbq"]
Jan 29 16:54:06 crc kubenswrapper[4813]: I0129 16:54:06.775823 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-vdpbq"]
Jan 29 16:54:08 crc kubenswrapper[4813]: I0129 16:54:08.251238 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fcb63b6-58c0-4f25-b300-a569d79e6815" path="/var/lib/kubelet/pods/1fcb63b6-58c0-4f25-b300-a569d79e6815/volumes"
Jan 29 16:54:09 crc kubenswrapper[4813]: I0129 16:54:09.907528 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused"
Jan 29 16:54:09 crc kubenswrapper[4813]: I0129 16:54:09.907955 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv"
Jan 29 16:54:14 crc kubenswrapper[4813]: E0129 16:54:14.069789 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777"
Jan 29 16:54:14 crc kubenswrapper[4813]: E0129 16:54:14.070451 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66bh5b5h8bh79h59chf8h5f7h5dbh558hf6hdbhdfh674h66fh97h5ddh56bh67ch56dh5c4h576h569h9ch55h54dh56hd7h5b9h585h688h5dh5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lznv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2641bf11-eabe-482d-90da-30c64479ae22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 16:54:14 crc kubenswrapper[4813]: I0129 16:54:14.907730 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused"
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.701235 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.764871 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-logs\") pod \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.765175 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-config-data\") pod \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.765216 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-scripts\") pod \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.765242 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj87n\" (UniqueName: \"kubernetes.io/projected/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-kube-api-access-nj87n\") pod \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.765625 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-logs" (OuterVolumeSpecName: "logs") pod "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" (UID: "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.784875 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-scripts" (OuterVolumeSpecName: "scripts") pod "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" (UID: "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.784933 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-kube-api-access-nj87n" (OuterVolumeSpecName: "kube-api-access-nj87n") pod "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" (UID: "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7"). InnerVolumeSpecName "kube-api-access-nj87n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.813275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-config-data" (OuterVolumeSpecName: "config-data") pod "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" (UID: "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.825808 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7","Type":"ContainerDied","Data":"4c6550f865e8893efb2329c69681f06e705e95730b6f789e689f25172eab7651"}
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.825865 4813 scope.go:117] "RemoveContainer" containerID="bd5ee678c6d9e49f2c23f790343a80a49355a0dd9c4596319e4293f9be6eaf7f"
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.826002 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.866418 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-combined-ca-bundle\") pod \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.866534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-httpd-run\") pod \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.866593 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.866672 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-public-tls-certs\") pod \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\" (UID: \"c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7\") "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.867022 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.867034 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.867044 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.867052 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj87n\" (UniqueName: \"kubernetes.io/projected/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-kube-api-access-nj87n\") on node \"crc\" DevicePath \"\""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.867894 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" (UID: "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.870779 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" (UID: "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.903092 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" (UID: "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.930268 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" (UID: "c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.968941 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.969006 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.969017 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.969027 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:54:16 crc kubenswrapper[4813]: I0129 16:54:16.986641 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.071158 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.170713 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.190970 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.215526 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 16:54:17 crc kubenswrapper[4813]: E0129 16:54:17.215965 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" containerName="init"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.215986 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" containerName="init"
Jan 29 16:54:17 crc kubenswrapper[4813]: E0129 16:54:17.216005 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerName="glance-log"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.216016 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerName="glance-log"
Jan 29 16:54:17 crc kubenswrapper[4813]: E0129 16:54:17.216050 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerName="glance-httpd"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.216059 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerName="glance-httpd"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.216355 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerName="glance-httpd"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.216425 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd3cba6-778d-4355-b6fa-e5f0eb05ee13" containerName="init"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.216443 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" containerName="glance-log"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.217533 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.222970 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.223298 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.226296 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.376327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.376471 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.376567 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-logs\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.376620 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wsww\" (UniqueName: \"kubernetes.io/projected/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-kube-api-access-5wsww\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.376664 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.376689 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.376807 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.376893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.478589 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wsww\" (UniqueName: \"kubernetes.io/projected/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-kube-api-access-5wsww\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.478707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.478743 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.478825 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.478854 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.478895 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.478934 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.478955 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-logs\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.479144 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.479249 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.479491 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-logs\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.487489 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.495966 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.496190 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.496711 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.524065 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wsww\" (UniqueName: \"kubernetes.io/projected/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-kube-api-access-5wsww\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.595540 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " pod="openstack/glance-default-external-api-0"
Jan 29 16:54:17 crc kubenswrapper[4813]: I0129 16:54:17.838762 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 16:54:18 crc kubenswrapper[4813]: I0129 16:54:18.256918 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7" path="/var/lib/kubelet/pods/c9ec3ba9-a6df-4b0a-93ca-4f56fa0765a7/volumes"
Jan 29 16:54:18 crc kubenswrapper[4813]: I0129 16:54:18.513506 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 16:54:18 crc kubenswrapper[4813]: I0129 16:54:18.513570 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 16:54:24 crc kubenswrapper[4813]: I0129 16:54:24.907723 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:54:27 crc kubenswrapper[4813]: E0129 16:54:27.980426 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b"
Jan 29 16:54:27 crc kubenswrapper[4813]: E0129 16:54:27.981303 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gq8n4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-6l7qs_openstack(db7c43bd-490e-4742-8a72-ed0687faf4dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 16:54:27 crc kubenswrapper[4813]: E0129 16:54:27.982507 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-6l7qs" podUID="db7c43bd-490e-4742-8a72-ed0687faf4dd"
Jan 29 16:54:28 crc kubenswrapper[4813]: E0129 16:54:28.936767 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b\\\"\"" pod="openstack/placement-db-sync-6l7qs" podUID="db7c43bd-490e-4742-8a72-ed0687faf4dd"
Jan 29 16:54:29 crc kubenswrapper[4813]: I0129 16:54:29.909020 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:54:30 crc kubenswrapper[4813]: I0129 16:54:30.240171 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 16:54:30 crc kubenswrapper[4813]: I0129 16:54:30.240285 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 16:54:30 crc kubenswrapper[4813]: I0129 16:54:30.253216 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r"
Jan 29 16:54:30 crc kubenswrapper[4813]: I0129 16:54:30.254333 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 16:54:30 crc kubenswrapper[4813]: I0129 16:54:30.254422 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" gracePeriod=600
Jan 29 16:54:32 crc kubenswrapper[4813]: I0129 16:54:32.943261 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gz589"]
Jan 29 16:54:32 crc kubenswrapper[4813]: I0129 16:54:32.945301 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:32 crc kubenswrapper[4813]: I0129 16:54:32.952377 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gz589"]
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.082737 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-utilities\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.083193 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgv86\" (UniqueName: \"kubernetes.io/projected/2309ea66-f028-4c72-b6e6-009762934e48-kube-api-access-pgv86\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.083356 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-catalog-content\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.185226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-utilities\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.185345 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgv86\" (UniqueName: \"kubernetes.io/projected/2309ea66-f028-4c72-b6e6-009762934e48-kube-api-access-pgv86\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.185383 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-catalog-content\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.186050 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-catalog-content\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.186521 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-utilities\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.210630 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgv86\" (UniqueName: \"kubernetes.io/projected/2309ea66-f028-4c72-b6e6-009762934e48-kube-api-access-pgv86\") pod \"redhat-operators-gz589\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.279068 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gz589"
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.983573 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" exitCode=0
Jan 29 16:54:33 crc kubenswrapper[4813]: I0129 16:54:33.983628 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd"}
Jan 29 16:54:34 crc kubenswrapper[4813]: I0129 16:54:34.910246 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:54:39 crc kubenswrapper[4813]: I0129 16:54:39.911856 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:54:40 crc kubenswrapper[4813]: I0129 16:54:40.943569 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pcg84"]
Jan 29 16:54:40 crc kubenswrapper[4813]: I0129 16:54:40.946300 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:40 crc kubenswrapper[4813]: I0129 16:54:40.951485 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh6w4\" (UniqueName: \"kubernetes.io/projected/941d07ef-0203-4f55-a276-cb2216375048-kube-api-access-zh6w4\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:40 crc kubenswrapper[4813]: I0129 16:54:40.951583 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-utilities\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:40 crc kubenswrapper[4813]: I0129 16:54:40.951615 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-catalog-content\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:40 crc kubenswrapper[4813]: I0129 16:54:40.964437 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcg84"]
Jan 29 16:54:41 crc kubenswrapper[4813]: I0129 16:54:41.053142 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-catalog-content\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:41 crc kubenswrapper[4813]: I0129 16:54:41.053323 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh6w4\" (UniqueName: \"kubernetes.io/projected/941d07ef-0203-4f55-a276-cb2216375048-kube-api-access-zh6w4\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:41 crc kubenswrapper[4813]: I0129 16:54:41.053417 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-utilities\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:41 crc kubenswrapper[4813]: I0129 16:54:41.053699 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-catalog-content\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:41 crc kubenswrapper[4813]: I0129 16:54:41.053798 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-utilities\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:41 crc kubenswrapper[4813]: I0129 16:54:41.084156 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh6w4\" (UniqueName: \"kubernetes.io/projected/941d07ef-0203-4f55-a276-cb2216375048-kube-api-access-zh6w4\") pod \"redhat-marketplace-pcg84\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:41 crc kubenswrapper[4813]: I0129 16:54:41.265181 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pcg84"
Jan 29 16:54:44 crc kubenswrapper[4813]: I0129 16:54:44.913531 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:54:49 crc kubenswrapper[4813]: I0129 16:54:49.914466 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:54:54 crc kubenswrapper[4813]: I0129 16:54:54.916273 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:54:59 crc kubenswrapper[4813]: I0129 16:54:59.917737 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:55:04 crc kubenswrapper[4813]: I0129 16:55:04.919279 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:55:09 crc kubenswrapper[4813]: I0129 16:55:09.921626 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:55:12 crc kubenswrapper[4813]: I0129 16:55:12.305902 4813 generic.go:334] "Generic (PLEG): container finished" podID="03ced5e2-5961-4c26-b918-1f7dfd02e734" containerID="95e1ed9b55c99c7e2bc8ecb98831d34a4e578878550c94782b4a588796b0ec9e" exitCode=0
Jan 29 16:55:12 crc kubenswrapper[4813]: I0129 16:55:12.305999 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdj8p" event={"ID":"03ced5e2-5961-4c26-b918-1f7dfd02e734","Type":"ContainerDied","Data":"95e1ed9b55c99c7e2bc8ecb98831d34a4e578878550c94782b4a588796b0ec9e"}
Jan 29 16:55:14 crc kubenswrapper[4813]: I0129 16:55:14.922173 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.796199 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv"
Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.805686 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.820162 4813 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.875056 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-nb\") pod \"abf498ff-9c45-421e-9db3-f114936b22e8\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.875159 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-svc\") pod \"abf498ff-9c45-421e-9db3-f114936b22e8\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.875204 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-sb\") pod \"abf498ff-9c45-421e-9db3-f114936b22e8\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.875291 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-config\") pod \"abf498ff-9c45-421e-9db3-f114936b22e8\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.875361 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-swift-storage-0\") pod \"abf498ff-9c45-421e-9db3-f114936b22e8\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.875441 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm68v\" (UniqueName: \"kubernetes.io/projected/abf498ff-9c45-421e-9db3-f114936b22e8-kube-api-access-nm68v\") pod \"abf498ff-9c45-421e-9db3-f114936b22e8\" (UID: \"abf498ff-9c45-421e-9db3-f114936b22e8\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.883269 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf498ff-9c45-421e-9db3-f114936b22e8-kube-api-access-nm68v" (OuterVolumeSpecName: "kube-api-access-nm68v") pod "abf498ff-9c45-421e-9db3-f114936b22e8" (UID: "abf498ff-9c45-421e-9db3-f114936b22e8"). InnerVolumeSpecName "kube-api-access-nm68v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.919172 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-config" (OuterVolumeSpecName: "config") pod "abf498ff-9c45-421e-9db3-f114936b22e8" (UID: "abf498ff-9c45-421e-9db3-f114936b22e8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.929570 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "abf498ff-9c45-421e-9db3-f114936b22e8" (UID: "abf498ff-9c45-421e-9db3-f114936b22e8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.934538 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abf498ff-9c45-421e-9db3-f114936b22e8" (UID: "abf498ff-9c45-421e-9db3-f114936b22e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.937782 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "abf498ff-9c45-421e-9db3-f114936b22e8" (UID: "abf498ff-9c45-421e-9db3-f114936b22e8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.942086 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abf498ff-9c45-421e-9db3-f114936b22e8" (UID: "abf498ff-9c45-421e-9db3-f114936b22e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: E0129 16:55:18.943265 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977012 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-credential-keys\") pod \"03ced5e2-5961-4c26-b918-1f7dfd02e734\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977061 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-fernet-keys\") pod \"03ced5e2-5961-4c26-b918-1f7dfd02e734\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977141 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-httpd-run\") pod \"f661563e-cd58-4882-94c2-c2270826a65a\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977157 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-config-data\") pod \"03ced5e2-5961-4c26-b918-1f7dfd02e734\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977176 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-combined-ca-bundle\") pod \"03ced5e2-5961-4c26-b918-1f7dfd02e734\" 
(UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977194 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f661563e-cd58-4882-94c2-c2270826a65a\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977231 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4mwd\" (UniqueName: \"kubernetes.io/projected/03ced5e2-5961-4c26-b918-1f7dfd02e734-kube-api-access-x4mwd\") pod \"03ced5e2-5961-4c26-b918-1f7dfd02e734\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977255 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-combined-ca-bundle\") pod \"f661563e-cd58-4882-94c2-c2270826a65a\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977276 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-scripts\") pod \"f661563e-cd58-4882-94c2-c2270826a65a\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977379 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-internal-tls-certs\") pod \"f661563e-cd58-4882-94c2-c2270826a65a\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977407 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-logs\") pod \"f661563e-cd58-4882-94c2-c2270826a65a\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977426 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-scripts\") pod \"03ced5e2-5961-4c26-b918-1f7dfd02e734\" (UID: \"03ced5e2-5961-4c26-b918-1f7dfd02e734\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977440 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-config-data\") pod \"f661563e-cd58-4882-94c2-c2270826a65a\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977463 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmmzb\" (UniqueName: \"kubernetes.io/projected/f661563e-cd58-4882-94c2-c2270826a65a-kube-api-access-gmmzb\") pod \"f661563e-cd58-4882-94c2-c2270826a65a\" (UID: \"f661563e-cd58-4882-94c2-c2270826a65a\") " Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977763 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977774 4813 reconciler_common.go:293] "Volume detached 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977784 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977793 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977801 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abf498ff-9c45-421e-9db3-f114936b22e8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.977810 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm68v\" (UniqueName: \"kubernetes.io/projected/abf498ff-9c45-421e-9db3-f114936b22e8-kube-api-access-nm68v\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.978886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-logs" (OuterVolumeSpecName: "logs") pod "f661563e-cd58-4882-94c2-c2270826a65a" (UID: "f661563e-cd58-4882-94c2-c2270826a65a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.979921 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f661563e-cd58-4882-94c2-c2270826a65a" (UID: "f661563e-cd58-4882-94c2-c2270826a65a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.981545 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ced5e2-5961-4c26-b918-1f7dfd02e734-kube-api-access-x4mwd" (OuterVolumeSpecName: "kube-api-access-x4mwd") pod "03ced5e2-5961-4c26-b918-1f7dfd02e734" (UID: "03ced5e2-5961-4c26-b918-1f7dfd02e734"). InnerVolumeSpecName "kube-api-access-x4mwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.984293 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "03ced5e2-5961-4c26-b918-1f7dfd02e734" (UID: "03ced5e2-5961-4c26-b918-1f7dfd02e734"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.984313 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "f661563e-cd58-4882-94c2-c2270826a65a" (UID: "f661563e-cd58-4882-94c2-c2270826a65a"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.984444 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-scripts" (OuterVolumeSpecName: "scripts") pod "03ced5e2-5961-4c26-b918-1f7dfd02e734" (UID: "03ced5e2-5961-4c26-b918-1f7dfd02e734"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.984712 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f661563e-cd58-4882-94c2-c2270826a65a-kube-api-access-gmmzb" (OuterVolumeSpecName: "kube-api-access-gmmzb") pod "f661563e-cd58-4882-94c2-c2270826a65a" (UID: "f661563e-cd58-4882-94c2-c2270826a65a"). InnerVolumeSpecName "kube-api-access-gmmzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.984712 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-scripts" (OuterVolumeSpecName: "scripts") pod "f661563e-cd58-4882-94c2-c2270826a65a" (UID: "f661563e-cd58-4882-94c2-c2270826a65a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:18 crc kubenswrapper[4813]: I0129 16:55:18.986036 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "03ced5e2-5961-4c26-b918-1f7dfd02e734" (UID: "03ced5e2-5961-4c26-b918-1f7dfd02e734"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.018815 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.018965 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5fjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-72hcg_openstack(a9d3644d-4332-4c4e-a354-11aa4588e143): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.020259 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03ced5e2-5961-4c26-b918-1f7dfd02e734" (UID: "03ced5e2-5961-4c26-b918-1f7dfd02e734"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.020344 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-72hcg" podUID="a9d3644d-4332-4c4e-a354-11aa4588e143" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.022243 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-config-data" (OuterVolumeSpecName: "config-data") pod "03ced5e2-5961-4c26-b918-1f7dfd02e734" (UID: "03ced5e2-5961-4c26-b918-1f7dfd02e734"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.027842 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f661563e-cd58-4882-94c2-c2270826a65a" (UID: "f661563e-cd58-4882-94c2-c2270826a65a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.042259 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f661563e-cd58-4882-94c2-c2270826a65a" (UID: "f661563e-cd58-4882-94c2-c2270826a65a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.042386 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-config-data" (OuterVolumeSpecName: "config-data") pod "f661563e-cd58-4882-94c2-c2270826a65a" (UID: "f661563e-cd58-4882-94c2-c2270826a65a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079672 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079710 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079721 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079729 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079737 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmmzb\" (UniqueName: \"kubernetes.io/projected/f661563e-cd58-4882-94c2-c2270826a65a-kube-api-access-gmmzb\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079748 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079755 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079763 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f661563e-cd58-4882-94c2-c2270826a65a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079772 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079780 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03ced5e2-5961-4c26-b918-1f7dfd02e734-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079815 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079832 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4mwd\" (UniqueName: \"kubernetes.io/projected/03ced5e2-5961-4c26-b918-1f7dfd02e734-kube-api-access-x4mwd\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079840 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.079848 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f661563e-cd58-4882-94c2-c2270826a65a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.094671 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.181752 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.402015 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.402083 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f661563e-cd58-4882-94c2-c2270826a65a","Type":"ContainerDied","Data":"33812885cab6af5c942e1abc30b2427e46981b16c7ba9138709e6f276b999b24"} Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.405389 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mdj8p" event={"ID":"03ced5e2-5961-4c26-b918-1f7dfd02e734","Type":"ContainerDied","Data":"e4c2e0fb21d61a364759a04c1757945c8bb66973469a1ac47eeeb37631b01155"} Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.405426 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4c2e0fb21d61a364759a04c1757945c8bb66973469a1ac47eeeb37631b01155" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.405476 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mdj8p" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.421453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" event={"ID":"abf498ff-9c45-421e-9db3-f114936b22e8","Type":"ContainerDied","Data":"199f3f7af77b10a2cfbdea72d60b2b3ca988b1141f75564ba6c39d3b22da9a14"} Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.421610 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.422736 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.423135 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.423896 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-72hcg" podUID="a9d3644d-4332-4c4e-a354-11aa4588e143" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.486408 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.504991 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.514456 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-mczvv"] Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.523948 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-mczvv"] Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.531503 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.531865 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="init" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.531890 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="init" Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.531909 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f661563e-cd58-4882-94c2-c2270826a65a" containerName="glance-httpd" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.531915 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f661563e-cd58-4882-94c2-c2270826a65a" containerName="glance-httpd" Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.531923 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.531929 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.531946 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f661563e-cd58-4882-94c2-c2270826a65a" containerName="glance-log" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.531951 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f661563e-cd58-4882-94c2-c2270826a65a" containerName="glance-log" 
Jan 29 16:55:19 crc kubenswrapper[4813]: E0129 16:55:19.531966 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ced5e2-5961-4c26-b918-1f7dfd02e734" containerName="keystone-bootstrap" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.531972 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ced5e2-5961-4c26-b918-1f7dfd02e734" containerName="keystone-bootstrap" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.532175 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ced5e2-5961-4c26-b918-1f7dfd02e734" containerName="keystone-bootstrap" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.532190 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.532202 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f661563e-cd58-4882-94c2-c2270826a65a" containerName="glance-httpd" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.532211 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f661563e-cd58-4882-94c2-c2270826a65a" containerName="glance-log" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.533417 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.536042 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.537171 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.550877 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.689868 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.689939 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.690042 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-logs\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.690211 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 
16:55:19.690252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.690272 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.690324 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44898\" (UniqueName: \"kubernetes.io/projected/d606996d-c2f9-4072-b026-d7594399dd75-kube-api-access-44898\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.690348 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.791648 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.791705 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.791761 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44898\" (UniqueName: \"kubernetes.io/projected/d606996d-c2f9-4072-b026-d7594399dd75-kube-api-access-44898\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.791784 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.791807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 
crc kubenswrapper[4813]: I0129 16:55:19.791836 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.791908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-logs\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.791935 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.792090 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.793131 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.793683 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-logs\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.797363 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.799161 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.799737 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.811526 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.816248 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44898\" (UniqueName: \"kubernetes.io/projected/d606996d-c2f9-4072-b026-d7594399dd75-kube-api-access-44898\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.828600 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.858937 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.929014 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-mczvv" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.939258 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mdj8p"] Jan 29 16:55:19 crc kubenswrapper[4813]: I0129 16:55:19.953897 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mdj8p"] Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.039835 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b6pbr"] Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.041098 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.043777 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.044023 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.044976 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.045272 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.050305 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l2qv9" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.054489 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b6pbr"] Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.099264 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-scripts\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.099371 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-config-data\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.099434 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j56m\" (UniqueName: \"kubernetes.io/projected/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-kube-api-access-2j56m\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.099490 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-credential-keys\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.099540 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-combined-ca-bundle\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.099562 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-fernet-keys\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.201243 4813 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-config-data\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.201359 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j56m\" (UniqueName: \"kubernetes.io/projected/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-kube-api-access-2j56m\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.201408 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-credential-keys\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.201446 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-combined-ca-bundle\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.201608 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-fernet-keys\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.202167 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-scripts\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.206419 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-scripts\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.207243 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-fernet-keys\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.207355 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-config-data\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.207625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-credential-keys\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " 
pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.207892 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-combined-ca-bundle\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.217683 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j56m\" (UniqueName: \"kubernetes.io/projected/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-kube-api-access-2j56m\") pod \"keystone-bootstrap-b6pbr\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.252068 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ced5e2-5961-4c26-b918-1f7dfd02e734" path="/var/lib/kubelet/pods/03ced5e2-5961-4c26-b918-1f7dfd02e734/volumes" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.252779 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf498ff-9c45-421e-9db3-f114936b22e8" path="/var/lib/kubelet/pods/abf498ff-9c45-421e-9db3-f114936b22e8/volumes" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.253531 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f661563e-cd58-4882-94c2-c2270826a65a" path="/var/lib/kubelet/pods/f661563e-cd58-4882-94c2-c2270826a65a/volumes" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.367812 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:20 crc kubenswrapper[4813]: I0129 16:55:20.640641 4813 scope.go:117] "RemoveContainer" containerID="66e56c97d3fac86757b79d0f7f688f39ccdec446fb3d4e62856fde34f1e7220c" Jan 29 16:55:20 crc kubenswrapper[4813]: E0129 16:55:20.820354 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 29 16:55:20 crc kubenswrapper[4813]: E0129 16:55:20.820593 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4rlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-mwds2_openstack(944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:55:20 crc kubenswrapper[4813]: E0129 16:55:20.824376 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-mwds2" podUID="944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" Jan 29 16:55:21 crc kubenswrapper[4813]: E0129 16:55:21.295860 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67" Jan 29 16:55:21 crc kubenswrapper[4813]: E0129 16:55:21.296332 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66bh5b5h8bh79h59chf8h5f7h5dbh558hf6hdbhdfh674h66fh97h5ddh56bh67ch56dh5c4h576h569h9ch55h54dh56hd7h5b9h585h688h5dh5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lznv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2641bf11-eabe-482d-90da-30c64479ae22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:55:21 crc kubenswrapper[4813]: E0129 16:55:21.839470 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-mwds2" podUID="944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.049660 4813 scope.go:117] "RemoveContainer" containerID="8987fbd6eab75cb8c8d4b0dc3c9cd4584d6a2ba36fcbf1141525385eea963b7d" Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.242990 4813 scope.go:117] "RemoveContainer" containerID="6fcafd07df42eadabcf09ee5f2d0d1eaf89e33becf6ad4980fc293563804bcd7" Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.283312 4813 scope.go:117] "RemoveContainer" containerID="c27197c75e34a046a2c1ee209558c29cdaffa7d11c9e7da2820d57409075f85d" Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.366384 4813 scope.go:117] "RemoveContainer" containerID="c36e3f815fe5757c306487f5bb4e69ca1c9e3e2b468d633fb7a3705270397f42" Jan 29 
Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.395322 4813 scope.go:117] "RemoveContainer" containerID="5e10863fe4481e09897751870530afb49988df0d247f791f62ba2296c587cffb"
Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.403054 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gz589"]
Jan 29 16:55:22 crc kubenswrapper[4813]: W0129 16:55:22.413098 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2309ea66_f028_4c72_b6e6_009762934e48.slice/crio-37f1f478fd16083ecacc92d2319d3fff02e050377980af803ac00bc6d2c6e8bf WatchSource:0}: Error finding container 37f1f478fd16083ecacc92d2319d3fff02e050377980af803ac00bc6d2c6e8bf: Status 404 returned error can't find the container with id 37f1f478fd16083ecacc92d2319d3fff02e050377980af803ac00bc6d2c6e8bf
Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.451858 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6l7qs" event={"ID":"db7c43bd-490e-4742-8a72-ed0687faf4dd","Type":"ContainerStarted","Data":"7a62641b8e0956ea67aaca50700652a0aaf86c4bf3d1382bbda5d5d373dbf256"}
Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.471180 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz589" event={"ID":"2309ea66-f028-4c72-b6e6-009762934e48","Type":"ContainerStarted","Data":"37f1f478fd16083ecacc92d2319d3fff02e050377980af803ac00bc6d2c6e8bf"}
Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.547461 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.641343 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b6pbr"]
Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.662472 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcg84"]
Jan 29 16:55:22 crc kubenswrapper[4813]: W0129 16:55:22.670215 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod941d07ef_0203_4f55_a276_cb2216375048.slice/crio-76c6bfe1026a8a31c2cf5a99567d55e7e1edcea25141ea2b0d9de019623d717b WatchSource:0}: Error finding container 76c6bfe1026a8a31c2cf5a99567d55e7e1edcea25141ea2b0d9de019623d717b: Status 404 returned error can't find the container with id 76c6bfe1026a8a31c2cf5a99567d55e7e1edcea25141ea2b0d9de019623d717b
Jan 29 16:55:22 crc kubenswrapper[4813]: I0129 16:55:22.733353 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 16:55:22 crc kubenswrapper[4813]: W0129 16:55:22.806029 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd606996d_c2f9_4072_b026_d7594399dd75.slice/crio-8bb9b98f340a62f10ab807d41b3a3bc47cf2a032b744d15e10e105fa11e3ec75 WatchSource:0}: Error finding container 8bb9b98f340a62f10ab807d41b3a3bc47cf2a032b744d15e10e105fa11e3ec75: Status 404 returned error can't find the container with id 8bb9b98f340a62f10ab807d41b3a3bc47cf2a032b744d15e10e105fa11e3ec75
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.487607 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b6pbr" event={"ID":"c9ab9d45-b855-4c1a-9e32-24c9d3052f52","Type":"ContainerStarted","Data":"8f7c8d0a6c71b5e838911574aca3ddbbb7a206178ccb747bdc1bf3c3d3aae782"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.488516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b6pbr" event={"ID":"c9ab9d45-b855-4c1a-9e32-24c9d3052f52","Type":"ContainerStarted","Data":"6f025aa3aadec414f99a369e57bd0c0c67e6ea80d1c55a8ce11016d46a501207"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.491648 4813 generic.go:334] "Generic (PLEG): container finished" podID="2309ea66-f028-4c72-b6e6-009762934e48" containerID="6502ab3a984dfe4241e27edccf40f94cf2b56cacf279a4075bbfada3075971a4" exitCode=0
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.492198 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz589" event={"ID":"2309ea66-f028-4c72-b6e6-009762934e48","Type":"ContainerDied","Data":"6502ab3a984dfe4241e27edccf40f94cf2b56cacf279a4075bbfada3075971a4"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.498903 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d606996d-c2f9-4072-b026-d7594399dd75","Type":"ContainerStarted","Data":"44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.498949 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d606996d-c2f9-4072-b026-d7594399dd75","Type":"ContainerStarted","Data":"8bb9b98f340a62f10ab807d41b3a3bc47cf2a032b744d15e10e105fa11e3ec75"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.514043 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcg84" event={"ID":"941d07ef-0203-4f55-a276-cb2216375048","Type":"ContainerStarted","Data":"fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.514210 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcg84" event={"ID":"941d07ef-0203-4f55-a276-cb2216375048","Type":"ContainerStarted","Data":"76c6bfe1026a8a31c2cf5a99567d55e7e1edcea25141ea2b0d9de019623d717b"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.533704 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953","Type":"ContainerStarted","Data":"410866b4da0487fd569b786c3474b9455190956c89b7bae91ae7d9c87b59034e"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.534190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953","Type":"ContainerStarted","Data":"b851be1454eb45fd4d94a3ec5bd0f4a55323ac51465567d072cf93521f6a0fd1"}
Jan 29 16:55:23 crc kubenswrapper[4813]: I0129 16:55:23.581529 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-6l7qs" podStartSLOduration=3.23992127 podStartE2EDuration="1m36.581501412s" podCreationTimestamp="2026-01-29 16:53:47 +0000 UTC" firstStartedPulling="2026-01-29 16:53:48.757087983 +0000 UTC m=+1481.244291199" lastFinishedPulling="2026-01-29 16:55:22.098668125 +0000 UTC m=+1574.585871341" observedRunningTime="2026-01-29 16:55:23.555086136 +0000 UTC m=+1576.042289382" watchObservedRunningTime="2026-01-29 16:55:23.581501412 +0000 UTC m=+1576.068704628"
Jan 29 16:55:24 crc kubenswrapper[4813]: I0129 16:55:24.545575 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953","Type":"ContainerStarted","Data":"d1652c697171b4644384aad69873e83fc4dc25a9c4b63c4c79eced2b478dcb88"}
Jan 29 16:55:24 crc kubenswrapper[4813]: I0129 16:55:24.552364 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d606996d-c2f9-4072-b026-d7594399dd75","Type":"ContainerStarted","Data":"78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36"}
Jan 29 16:55:24 crc kubenswrapper[4813]: I0129 16:55:24.555817 4813 generic.go:334] "Generic (PLEG): container finished" podID="941d07ef-0203-4f55-a276-cb2216375048" containerID="fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b" exitCode=0
Jan 29 16:55:24 crc kubenswrapper[4813]: I0129 16:55:24.555867 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcg84" event={"ID":"941d07ef-0203-4f55-a276-cb2216375048","Type":"ContainerDied","Data":"fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b"}
Jan 29 16:55:24 crc kubenswrapper[4813]: I0129 16:55:24.583544 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b6pbr" podStartSLOduration=4.583524329 podStartE2EDuration="4.583524329s" podCreationTimestamp="2026-01-29 16:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:24.574413871 +0000 UTC m=+1577.061617097" watchObservedRunningTime="2026-01-29 16:55:24.583524329 +0000 UTC m=+1577.070727545"
Jan 29 16:55:25 crc kubenswrapper[4813]: I0129 16:55:25.591923 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=68.591901505 podStartE2EDuration="1m8.591901505s" podCreationTimestamp="2026-01-29 16:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:25.5913601 +0000 UTC m=+1578.078563326" watchObservedRunningTime="2026-01-29 16:55:25.591901505 +0000 UTC m=+1578.079104721"
Jan 29 16:55:25 crc kubenswrapper[4813]: I0129 16:55:25.629452 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.629430394 podStartE2EDuration="6.629430394s" podCreationTimestamp="2026-01-29 16:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:25.622019585 +0000 UTC m=+1578.109222811" watchObservedRunningTime="2026-01-29 16:55:25.629430394 +0000 UTC m=+1578.116633610"
Jan 29 16:55:27 crc kubenswrapper[4813]: I0129 16:55:27.839234 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 16:55:27 crc kubenswrapper[4813]: I0129 16:55:27.839323 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 16:55:27 crc kubenswrapper[4813]: I0129 16:55:27.868742 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 16:55:27 crc kubenswrapper[4813]: I0129 16:55:27.877022 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 16:55:28 crc kubenswrapper[4813]: I0129 16:55:28.610566 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 16:55:28 crc kubenswrapper[4813]: I0129 16:55:28.611007 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 29 16:55:29 crc kubenswrapper[4813]: I0129 16:55:29.859265 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:29 crc kubenswrapper[4813]: I0129 16:55:29.859361 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:29 crc kubenswrapper[4813]: I0129 16:55:29.892138 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:29 crc kubenswrapper[4813]: I0129 16:55:29.906403 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:30 crc kubenswrapper[4813]: I0129 16:55:30.241857 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd"
Jan 29 16:55:30 crc kubenswrapper[4813]: E0129 16:55:30.242378 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 16:55:30 crc kubenswrapper[4813]: I0129 16:55:30.633413 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 16:55:30 crc kubenswrapper[4813]: I0129 16:55:30.634133 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:30 crc kubenswrapper[4813]: I0129 16:55:30.634174 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:30 crc kubenswrapper[4813]: I0129 16:55:30.972078 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 16:55:31 crc kubenswrapper[4813]: I0129 16:55:31.647223 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 16:55:31 crc kubenswrapper[4813]: I0129 16:55:31.710697 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 16:55:32 crc kubenswrapper[4813]: I0129 16:55:32.617367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:32 crc kubenswrapper[4813]: I0129 16:55:32.658367 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 16:55:32 crc kubenswrapper[4813]: I0129 16:55:32.793146 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 16:55:43 crc kubenswrapper[4813]: I0129 16:55:43.239705 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd"
containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:55:43 crc kubenswrapper[4813]: E0129 16:55:43.242191 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:55:49 crc kubenswrapper[4813]: I0129 16:55:49.795792 4813 generic.go:334] "Generic (PLEG): container finished" podID="c9ab9d45-b855-4c1a-9e32-24c9d3052f52" containerID="8f7c8d0a6c71b5e838911574aca3ddbbb7a206178ccb747bdc1bf3c3d3aae782" exitCode=0 Jan 29 16:55:49 crc kubenswrapper[4813]: I0129 16:55:49.796038 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b6pbr" event={"ID":"c9ab9d45-b855-4c1a-9e32-24c9d3052f52","Type":"ContainerDied","Data":"8f7c8d0a6c71b5e838911574aca3ddbbb7a206178ccb747bdc1bf3c3d3aae782"} Jan 29 16:55:51 crc kubenswrapper[4813]: E0129 16:55:51.074355 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core@sha256:828e2158704d4954145386c2ef8d02a98d34f9e4170fdec3cb0e6de4c955ca92" Jan 29 16:55:51 crc kubenswrapper[4813]: E0129 16:55:51.074775 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core@sha256:828e2158704d4954145386c2ef8d02a98d34f9e4170fdec3cb0e6de4c955ca92,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lznv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2641bf11-eabe-482d-90da-30c64479ae22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 16:55:52 crc kubenswrapper[4813]: I0129 16:55:52.825202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b6pbr" event={"ID":"c9ab9d45-b855-4c1a-9e32-24c9d3052f52","Type":"ContainerDied","Data":"6f025aa3aadec414f99a369e57bd0c0c67e6ea80d1c55a8ce11016d46a501207"} Jan 29 16:55:52 crc kubenswrapper[4813]: I0129 16:55:52.826306 4813 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6f025aa3aadec414f99a369e57bd0c0c67e6ea80d1c55a8ce11016d46a501207" Jan 29 16:55:52 crc kubenswrapper[4813]: I0129 16:55:52.853249 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:52 crc kubenswrapper[4813]: E0129 16:55:52.924773 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:55:52 crc kubenswrapper[4813]: E0129 16:55:52.924955 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lznv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2641bf11-eabe-482d-90da-30c64479ae22): 
ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:55:52 crc kubenswrapper[4813]: E0129 16:55:52.926182 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"]" pod="openstack/ceilometer-0" podUID="2641bf11-eabe-482d-90da-30c64479ae22" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.034724 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-combined-ca-bundle\") pod \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.034935 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-credential-keys\") pod \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.034966 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-scripts\") pod \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.035020 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-config-data\") pod \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.035044 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j56m\" (UniqueName: \"kubernetes.io/projected/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-kube-api-access-2j56m\") pod \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.035075 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-fernet-keys\") pod \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\" (UID: \"c9ab9d45-b855-4c1a-9e32-24c9d3052f52\") " Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.041417 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-scripts" (OuterVolumeSpecName: "scripts") pod 
"c9ab9d45-b855-4c1a-9e32-24c9d3052f52" (UID: "c9ab9d45-b855-4c1a-9e32-24c9d3052f52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.041571 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c9ab9d45-b855-4c1a-9e32-24c9d3052f52" (UID: "c9ab9d45-b855-4c1a-9e32-24c9d3052f52"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.042021 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-kube-api-access-2j56m" (OuterVolumeSpecName: "kube-api-access-2j56m") pod "c9ab9d45-b855-4c1a-9e32-24c9d3052f52" (UID: "c9ab9d45-b855-4c1a-9e32-24c9d3052f52"). InnerVolumeSpecName "kube-api-access-2j56m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.042162 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c9ab9d45-b855-4c1a-9e32-24c9d3052f52" (UID: "c9ab9d45-b855-4c1a-9e32-24c9d3052f52"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.063171 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9ab9d45-b855-4c1a-9e32-24c9d3052f52" (UID: "c9ab9d45-b855-4c1a-9e32-24c9d3052f52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.063190 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-config-data" (OuterVolumeSpecName: "config-data") pod "c9ab9d45-b855-4c1a-9e32-24c9d3052f52" (UID: "c9ab9d45-b855-4c1a-9e32-24c9d3052f52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.136828 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.137369 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.137382 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j56m\" (UniqueName: \"kubernetes.io/projected/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-kube-api-access-2j56m\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.137392 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.137401 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.137409 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c9ab9d45-b855-4c1a-9e32-24c9d3052f52-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.841294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcg84" event={"ID":"941d07ef-0203-4f55-a276-cb2216375048","Type":"ContainerStarted","Data":"d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866"} Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.845319 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-72hcg" event={"ID":"a9d3644d-4332-4c4e-a354-11aa4588e143","Type":"ContainerStarted","Data":"f0e8e259d772767b69077be1ae02c8f19b4de52eb9c8b7a421a2f46277dd388d"} Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.848884 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b6pbr" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.848917 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz589" event={"ID":"2309ea66-f028-4c72-b6e6-009762934e48","Type":"ContainerStarted","Data":"93222a8700c2161b3d005f1f614b9f80f3a3d89e51073a5d5344932c63f7329b"} Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.867620 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-72hcg" podStartSLOduration=2.323234411 podStartE2EDuration="2m6.867599591s" podCreationTimestamp="2026-01-29 16:53:47 +0000 UTC" firstStartedPulling="2026-01-29 16:53:48.580177161 +0000 UTC m=+1481.067380377" lastFinishedPulling="2026-01-29 16:55:53.124542341 +0000 UTC m=+1605.611745557" observedRunningTime="2026-01-29 16:55:53.859027899 +0000 UTC m=+1606.346231115" watchObservedRunningTime="2026-01-29 16:55:53.867599591 +0000 UTC m=+1606.354802807" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.963221 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-757f6546f5-txkdm"] Jan 29 16:55:53 crc kubenswrapper[4813]: E0129 16:55:53.963597 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ab9d45-b855-4c1a-9e32-24c9d3052f52" containerName="keystone-bootstrap" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.963609 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ab9d45-b855-4c1a-9e32-24c9d3052f52" containerName="keystone-bootstrap" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.969890 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ab9d45-b855-4c1a-9e32-24c9d3052f52" containerName="keystone-bootstrap" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.970593 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.976667 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.976692 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.976675 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.976937 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l2qv9" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.977085 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 16:55:53 crc kubenswrapper[4813]: I0129 16:55:53.977285 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.002969 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-757f6546f5-txkdm"] Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.158745 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-combined-ca-bundle\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.158871 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-fernet-keys\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.158962 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-public-tls-certs\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.159034 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wskzk\" (UniqueName: \"kubernetes.io/projected/5d04685f-adbf-45b2-a649-34f27aadc2b1-kube-api-access-wskzk\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.159098 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-credential-keys\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.159160 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-scripts\") pod \"keystone-757f6546f5-txkdm\" (UID: 
\"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.159185 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-config-data\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.159220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-internal-tls-certs\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.175107 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.260901 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-fernet-keys\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.260977 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-public-tls-certs\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.261022 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wskzk\" (UniqueName: \"kubernetes.io/projected/5d04685f-adbf-45b2-a649-34f27aadc2b1-kube-api-access-wskzk\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.261057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-credential-keys\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.261095 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-scripts\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.261133 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-config-data\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.261164 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-internal-tls-certs\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.261213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-combined-ca-bundle\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.266921 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-public-tls-certs\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.266942 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-fernet-keys\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.267165 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-config-data\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.270462 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-scripts\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.270943 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-combined-ca-bundle\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.271452 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-internal-tls-certs\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.271699 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-credential-keys\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.288722 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wskzk\" (UniqueName: \"kubernetes.io/projected/5d04685f-adbf-45b2-a649-34f27aadc2b1-kube-api-access-wskzk\") pod \"keystone-757f6546f5-txkdm\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " 
pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.291802 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.362994 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-config-data\") pod \"2641bf11-eabe-482d-90da-30c64479ae22\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.363049 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-run-httpd\") pod \"2641bf11-eabe-482d-90da-30c64479ae22\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.363093 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-sg-core-conf-yaml\") pod \"2641bf11-eabe-482d-90da-30c64479ae22\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.363151 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-scripts\") pod \"2641bf11-eabe-482d-90da-30c64479ae22\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.363176 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-combined-ca-bundle\") pod \"2641bf11-eabe-482d-90da-30c64479ae22\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.363268 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-log-httpd\") pod \"2641bf11-eabe-482d-90da-30c64479ae22\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.363354 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lznv\" (UniqueName: \"kubernetes.io/projected/2641bf11-eabe-482d-90da-30c64479ae22-kube-api-access-9lznv\") pod \"2641bf11-eabe-482d-90da-30c64479ae22\" (UID: \"2641bf11-eabe-482d-90da-30c64479ae22\") " Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.363858 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2641bf11-eabe-482d-90da-30c64479ae22" (UID: "2641bf11-eabe-482d-90da-30c64479ae22"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.365066 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2641bf11-eabe-482d-90da-30c64479ae22" (UID: "2641bf11-eabe-482d-90da-30c64479ae22"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.368375 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-scripts" (OuterVolumeSpecName: "scripts") pod "2641bf11-eabe-482d-90da-30c64479ae22" (UID: "2641bf11-eabe-482d-90da-30c64479ae22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.368835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2641bf11-eabe-482d-90da-30c64479ae22-kube-api-access-9lznv" (OuterVolumeSpecName: "kube-api-access-9lznv") pod "2641bf11-eabe-482d-90da-30c64479ae22" (UID: "2641bf11-eabe-482d-90da-30c64479ae22"). InnerVolumeSpecName "kube-api-access-9lznv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.369351 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-config-data" (OuterVolumeSpecName: "config-data") pod "2641bf11-eabe-482d-90da-30c64479ae22" (UID: "2641bf11-eabe-482d-90da-30c64479ae22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.373797 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2641bf11-eabe-482d-90da-30c64479ae22" (UID: "2641bf11-eabe-482d-90da-30c64479ae22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.375801 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2641bf11-eabe-482d-90da-30c64479ae22" (UID: "2641bf11-eabe-482d-90da-30c64479ae22"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.465803 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.465827 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lznv\" (UniqueName: \"kubernetes.io/projected/2641bf11-eabe-482d-90da-30c64479ae22-kube-api-access-9lznv\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.465838 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.465847 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2641bf11-eabe-482d-90da-30c64479ae22-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.465857 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.465866 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.465877 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2641bf11-eabe-482d-90da-30c64479ae22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.767737 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-757f6546f5-txkdm"] Jan 29 16:55:54 crc kubenswrapper[4813]: W0129 16:55:54.772554 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d04685f_adbf_45b2_a649_34f27aadc2b1.slice/crio-7d1f61b8143da518eae9e93ab030fab9ebf2d458513e0564d13411c261f72353 WatchSource:0}: Error finding container 7d1f61b8143da518eae9e93ab030fab9ebf2d458513e0564d13411c261f72353: Status 404 returned error can't find the container with id 7d1f61b8143da518eae9e93ab030fab9ebf2d458513e0564d13411c261f72353 Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.861295 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-757f6546f5-txkdm" event={"ID":"5d04685f-adbf-45b2-a649-34f27aadc2b1","Type":"ContainerStarted","Data":"7d1f61b8143da518eae9e93ab030fab9ebf2d458513e0564d13411c261f72353"} Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.863267 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2641bf11-eabe-482d-90da-30c64479ae22","Type":"ContainerDied","Data":"40168312b2787874f1fa7a36e74919a432dbde9769dda3bcf19a7a42757ff22b"} Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.863401 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.876599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwds2" event={"ID":"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6","Type":"ContainerStarted","Data":"405b32419ba7c2468a5c44f5a8ea0559199efb36440983f8204fe5b78e3d66c2"} Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.920437 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-mwds2" podStartSLOduration=3.191854824 podStartE2EDuration="2m7.920407552s" podCreationTimestamp="2026-01-29 16:53:47 +0000 UTC" firstStartedPulling="2026-01-29 16:53:48.395248512 +0000 UTC m=+1480.882451728" lastFinishedPulling="2026-01-29 16:55:53.12380124 +0000 UTC m=+1605.611004456" observedRunningTime="2026-01-29 16:55:54.912653413 +0000 UTC m=+1607.399856649" watchObservedRunningTime="2026-01-29 16:55:54.920407552 +0000 UTC m=+1607.407610768" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.976737 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.983982 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.994589 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.996664 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.999072 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:55:54 crc kubenswrapper[4813]: I0129 16:55:54.999285 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.071294 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.108749 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:55 crc kubenswrapper[4813]: E0129 16:55:55.109354 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-rtnxg log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="4391d52b-5800-47c9-af5d-aed2b0ecbc59" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.179316 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtnxg\" (UniqueName: \"kubernetes.io/projected/4391d52b-5800-47c9-af5d-aed2b0ecbc59-kube-api-access-rtnxg\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.179676 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-config-data\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.179878 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-log-httpd\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.180085 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-run-httpd\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.180190 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.180252 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.180409 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-scripts\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.240205 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:55:55 crc kubenswrapper[4813]: E0129 16:55:55.240478 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.282273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtnxg\" (UniqueName: \"kubernetes.io/projected/4391d52b-5800-47c9-af5d-aed2b0ecbc59-kube-api-access-rtnxg\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.282681 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-config-data\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.282736 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-log-httpd\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.282772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-run-httpd\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.282800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.282833 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.282867 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-scripts\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.283919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-log-httpd\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.285741 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-run-httpd\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.289738 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.289853 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.290816 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-scripts\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.291191 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-config-data\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.308522 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtnxg\" (UniqueName: 
\"kubernetes.io/projected/4391d52b-5800-47c9-af5d-aed2b0ecbc59-kube-api-access-rtnxg\") pod \"ceilometer-0\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.885716 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-757f6546f5-txkdm" event={"ID":"5d04685f-adbf-45b2-a649-34f27aadc2b1","Type":"ContainerStarted","Data":"f6218d0edf63e4911af52e8755d9d930f8356969bd5a331cd99184e89398221d"} Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.885760 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.902590 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993393 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-log-httpd\") pod \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993488 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-run-httpd\") pod \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993540 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-combined-ca-bundle\") pod \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993611 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-sg-core-conf-yaml\") pod \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993659 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtnxg\" (UniqueName: \"kubernetes.io/projected/4391d52b-5800-47c9-af5d-aed2b0ecbc59-kube-api-access-rtnxg\") pod \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993720 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-config-data\") pod \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993771 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-scripts\") pod \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\" (UID: \"4391d52b-5800-47c9-af5d-aed2b0ecbc59\") " Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"4391d52b-5800-47c9-af5d-aed2b0ecbc59" (UID: "4391d52b-5800-47c9-af5d-aed2b0ecbc59"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.993908 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4391d52b-5800-47c9-af5d-aed2b0ecbc59" (UID: "4391d52b-5800-47c9-af5d-aed2b0ecbc59"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.994211 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.994232 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4391d52b-5800-47c9-af5d-aed2b0ecbc59-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:55 crc kubenswrapper[4813]: I0129 16:55:55.998314 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4391d52b-5800-47c9-af5d-aed2b0ecbc59-kube-api-access-rtnxg" (OuterVolumeSpecName: "kube-api-access-rtnxg") pod "4391d52b-5800-47c9-af5d-aed2b0ecbc59" (UID: "4391d52b-5800-47c9-af5d-aed2b0ecbc59"). InnerVolumeSpecName "kube-api-access-rtnxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:55.999948 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-config-data" (OuterVolumeSpecName: "config-data") pod "4391d52b-5800-47c9-af5d-aed2b0ecbc59" (UID: "4391d52b-5800-47c9-af5d-aed2b0ecbc59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.001459 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-scripts" (OuterVolumeSpecName: "scripts") pod "4391d52b-5800-47c9-af5d-aed2b0ecbc59" (UID: "4391d52b-5800-47c9-af5d-aed2b0ecbc59"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.095723 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.096217 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.096288 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtnxg\" (UniqueName: \"kubernetes.io/projected/4391d52b-5800-47c9-af5d-aed2b0ecbc59-kube-api-access-rtnxg\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.250040 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2641bf11-eabe-482d-90da-30c64479ae22" path="/var/lib/kubelet/pods/2641bf11-eabe-482d-90da-30c64479ae22/volumes" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.549324 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4391d52b-5800-47c9-af5d-aed2b0ecbc59" (UID: "4391d52b-5800-47c9-af5d-aed2b0ecbc59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.549439 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4391d52b-5800-47c9-af5d-aed2b0ecbc59" (UID: "4391d52b-5800-47c9-af5d-aed2b0ecbc59"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.603573 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.603609 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4391d52b-5800-47c9-af5d-aed2b0ecbc59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.893984 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.894418 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.926963 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-757f6546f5-txkdm" podStartSLOduration=3.926941627 podStartE2EDuration="3.926941627s" podCreationTimestamp="2026-01-29 16:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:55:56.923420908 +0000 UTC m=+1609.410624124" watchObservedRunningTime="2026-01-29 16:55:56.926941627 +0000 UTC m=+1609.414144843" Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.968441 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:56 crc kubenswrapper[4813]: I0129 16:55:56.992485 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.006427 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.008475 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.011898 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.011957 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.016072 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.111977 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtpr\" (UniqueName: \"kubernetes.io/projected/e6e8842a-9f0c-493c-a37c-581d1626a1ec-kube-api-access-qvtpr\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.112246 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.112333 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.112520 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.112702 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.112834 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-config-data\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.112968 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-scripts\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.214531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.214587 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.214670 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-config-data\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.214733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-scripts\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.214772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.214791 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.214810 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtpr\" (UniqueName: \"kubernetes.io/projected/e6e8842a-9f0c-493c-a37c-581d1626a1ec-kube-api-access-qvtpr\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.215352 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-run-httpd\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.215681 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-log-httpd\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.220076 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.220974 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.221382 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-scripts\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.224999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-config-data\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.234831 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtpr\" (UniqueName: \"kubernetes.io/projected/e6e8842a-9f0c-493c-a37c-581d1626a1ec-kube-api-access-qvtpr\") pod \"ceilometer-0\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.324760 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.840479 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:55:57 crc kubenswrapper[4813]: W0129 16:55:57.849903 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6e8842a_9f0c_493c_a37c_581d1626a1ec.slice/crio-7f0e072d7ee4d6d829eb93e9cbadb1b820fc6eb23d654275ae0c691d8f936741 WatchSource:0}: Error finding container 7f0e072d7ee4d6d829eb93e9cbadb1b820fc6eb23d654275ae0c691d8f936741: Status 404 returned error can't find the container with id 7f0e072d7ee4d6d829eb93e9cbadb1b820fc6eb23d654275ae0c691d8f936741 Jan 29 16:55:57 crc kubenswrapper[4813]: I0129 16:55:57.904631 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e8842a-9f0c-493c-a37c-581d1626a1ec","Type":"ContainerStarted","Data":"7f0e072d7ee4d6d829eb93e9cbadb1b820fc6eb23d654275ae0c691d8f936741"} Jan 29 16:55:58 crc kubenswrapper[4813]: I0129 16:55:58.252328 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4391d52b-5800-47c9-af5d-aed2b0ecbc59" path="/var/lib/kubelet/pods/4391d52b-5800-47c9-af5d-aed2b0ecbc59/volumes" Jan 29 16:56:02 crc kubenswrapper[4813]: I0129 16:56:02.949454 4813 generic.go:334] "Generic (PLEG): container finished" podID="941d07ef-0203-4f55-a276-cb2216375048" containerID="d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866" exitCode=0 Jan 29 16:56:02 crc kubenswrapper[4813]: I0129 16:56:02.949521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcg84" event={"ID":"941d07ef-0203-4f55-a276-cb2216375048","Type":"ContainerDied","Data":"d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866"} Jan 29 16:56:03 crc kubenswrapper[4813]: I0129 16:56:03.959651 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e8842a-9f0c-493c-a37c-581d1626a1ec","Type":"ContainerStarted","Data":"d737f93631b272e891889e0e44614ddd00cd536a026287162eeb195058391768"} Jan 29 16:56:03 crc kubenswrapper[4813]: I0129 16:56:03.960264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e8842a-9f0c-493c-a37c-581d1626a1ec","Type":"ContainerStarted","Data":"0d8366af612144e2a5062885647307bf2802910e6a7a44b00c4f431b834bfdec"} Jan 29 16:56:03 crc kubenswrapper[4813]: I0129 16:56:03.961751 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcg84" event={"ID":"941d07ef-0203-4f55-a276-cb2216375048","Type":"ContainerStarted","Data":"96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98"} Jan 29 16:56:03 crc kubenswrapper[4813]: I0129 16:56:03.985785 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pcg84" podStartSLOduration=45.122372216 podStartE2EDuration="1m23.985763291s" podCreationTimestamp="2026-01-29 16:54:40 +0000 UTC" firstStartedPulling="2026-01-29 16:55:24.560328914 +0000 UTC m=+1577.047532130" lastFinishedPulling="2026-01-29 16:56:03.423719989 +0000 UTC m=+1615.910923205" observedRunningTime="2026-01-29 16:56:03.980869613 +0000 UTC m=+1616.468072829" watchObservedRunningTime="2026-01-29 16:56:03.985763291 +0000 UTC m=+1616.472966507" Jan 29 16:56:04 crc kubenswrapper[4813]: E0129 16:56:04.773994 4813 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:56:04 crc kubenswrapper[4813]: E0129 16:56:04.774461 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvtpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e6e8842a-9f0c-493c-a37c-581d1626a1ec): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:04 crc kubenswrapper[4813]: E0129 16:56:04.775884 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" Jan 29 16:56:04 crc kubenswrapper[4813]: I0129 16:56:04.979218 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e8842a-9f0c-493c-a37c-581d1626a1ec","Type":"ContainerStarted","Data":"2f73ad869d0472c0958e52b7dac50b0b50e4069aa336a23f9f1d5fd38dc9a352"} Jan 29 16:56:04 crc kubenswrapper[4813]: E0129 16:56:04.981012 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" Jan 29 16:56:05 crc kubenswrapper[4813]: E0129 16:56:05.991656 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" Jan 29 16:56:09 crc kubenswrapper[4813]: I0129 16:56:09.022302 4813 generic.go:334] "Generic (PLEG): container finished" podID="2309ea66-f028-4c72-b6e6-009762934e48" containerID="93222a8700c2161b3d005f1f614b9f80f3a3d89e51073a5d5344932c63f7329b" exitCode=0 Jan 29 16:56:09 crc kubenswrapper[4813]: I0129 16:56:09.022399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz589" event={"ID":"2309ea66-f028-4c72-b6e6-009762934e48","Type":"ContainerDied","Data":"93222a8700c2161b3d005f1f614b9f80f3a3d89e51073a5d5344932c63f7329b"} Jan 29 16:56:10 crc kubenswrapper[4813]: I0129 16:56:10.033323 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz589" event={"ID":"2309ea66-f028-4c72-b6e6-009762934e48","Type":"ContainerStarted","Data":"9f570006a06221668892911b24c5a6e62fbd4b54f64b0206f5938aeda39f93b2"} Jan 29 16:56:10 crc kubenswrapper[4813]: I0129 16:56:10.057285 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gz589" podStartSLOduration=51.986799579 podStartE2EDuration="1m38.057266253s" podCreationTimestamp="2026-01-29 16:54:32 +0000 UTC" firstStartedPulling="2026-01-29 16:55:23.495664649 +0000 UTC m=+1575.982867865" lastFinishedPulling="2026-01-29 16:56:09.566131323 +0000 UTC m=+1622.053334539" observedRunningTime="2026-01-29 16:56:10.04936549 +0000 UTC m=+1622.536568706" watchObservedRunningTime="2026-01-29 16:56:10.057266253 +0000 UTC m=+1622.544469459" Jan 29 16:56:10 crc kubenswrapper[4813]: I0129 16:56:10.240139 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:56:10 crc kubenswrapper[4813]: E0129 16:56:10.240394 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:56:11 crc kubenswrapper[4813]: I0129 16:56:11.042581 4813 generic.go:334] "Generic (PLEG): container finished" podID="db7c43bd-490e-4742-8a72-ed0687faf4dd" containerID="7a62641b8e0956ea67aaca50700652a0aaf86c4bf3d1382bbda5d5d373dbf256" exitCode=0 Jan 29 16:56:11 crc kubenswrapper[4813]: I0129 16:56:11.042626 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6l7qs" event={"ID":"db7c43bd-490e-4742-8a72-ed0687faf4dd","Type":"ContainerDied","Data":"7a62641b8e0956ea67aaca50700652a0aaf86c4bf3d1382bbda5d5d373dbf256"} Jan 29 16:56:11 crc kubenswrapper[4813]: I0129 16:56:11.266232 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pcg84" Jan 29 16:56:11 crc kubenswrapper[4813]: I0129 16:56:11.266599 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pcg84" Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.346284 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-pcg84" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="registry-server" probeResult="failure" output=< Jan 29 16:56:12 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Jan 29 16:56:12 crc kubenswrapper[4813]: > Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.481458 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6l7qs" Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.579126 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7c43bd-490e-4742-8a72-ed0687faf4dd-logs\") pod \"db7c43bd-490e-4742-8a72-ed0687faf4dd\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.579289 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-combined-ca-bundle\") pod \"db7c43bd-490e-4742-8a72-ed0687faf4dd\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.579322 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq8n4\" (UniqueName: \"kubernetes.io/projected/db7c43bd-490e-4742-8a72-ed0687faf4dd-kube-api-access-gq8n4\") pod \"db7c43bd-490e-4742-8a72-ed0687faf4dd\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.579355 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-scripts\") pod \"db7c43bd-490e-4742-8a72-ed0687faf4dd\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.579400 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data\") pod \"db7c43bd-490e-4742-8a72-ed0687faf4dd\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 
16:56:12.580271 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db7c43bd-490e-4742-8a72-ed0687faf4dd-logs" (OuterVolumeSpecName: "logs") pod "db7c43bd-490e-4742-8a72-ed0687faf4dd" (UID: "db7c43bd-490e-4742-8a72-ed0687faf4dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.604326 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7c43bd-490e-4742-8a72-ed0687faf4dd-kube-api-access-gq8n4" (OuterVolumeSpecName: "kube-api-access-gq8n4") pod "db7c43bd-490e-4742-8a72-ed0687faf4dd" (UID: "db7c43bd-490e-4742-8a72-ed0687faf4dd"). InnerVolumeSpecName "kube-api-access-gq8n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.604368 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-scripts" (OuterVolumeSpecName: "scripts") pod "db7c43bd-490e-4742-8a72-ed0687faf4dd" (UID: "db7c43bd-490e-4742-8a72-ed0687faf4dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:12 crc kubenswrapper[4813]: E0129 16:56:12.604460 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data podName:db7c43bd-490e-4742-8a72-ed0687faf4dd nodeName:}" failed. No retries permitted until 2026-01-29 16:56:13.104439327 +0000 UTC m=+1625.591642543 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data") pod "db7c43bd-490e-4742-8a72-ed0687faf4dd" (UID: "db7c43bd-490e-4742-8a72-ed0687faf4dd") : error deleting /var/lib/kubelet/pods/db7c43bd-490e-4742-8a72-ed0687faf4dd/volume-subpaths: remove /var/lib/kubelet/pods/db7c43bd-490e-4742-8a72-ed0687faf4dd/volume-subpaths: no such file or directory Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.611315 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db7c43bd-490e-4742-8a72-ed0687faf4dd" (UID: "db7c43bd-490e-4742-8a72-ed0687faf4dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.681209 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq8n4\" (UniqueName: \"kubernetes.io/projected/db7c43bd-490e-4742-8a72-ed0687faf4dd-kube-api-access-gq8n4\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.681246 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.681256 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db7c43bd-490e-4742-8a72-ed0687faf4dd-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:12 crc kubenswrapper[4813]: I0129 16:56:12.681264 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.061135 4813 generic.go:334] "Generic (PLEG): container finished" podID="a9d3644d-4332-4c4e-a354-11aa4588e143" containerID="f0e8e259d772767b69077be1ae02c8f19b4de52eb9c8b7a421a2f46277dd388d" exitCode=0 Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.061228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-72hcg" event={"ID":"a9d3644d-4332-4c4e-a354-11aa4588e143","Type":"ContainerDied","Data":"f0e8e259d772767b69077be1ae02c8f19b4de52eb9c8b7a421a2f46277dd388d"} Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.062884 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-6l7qs" event={"ID":"db7c43bd-490e-4742-8a72-ed0687faf4dd","Type":"ContainerDied","Data":"cc196734c95a3f30e6d6acd67b467ff588ff9d5b0572ae243455ea2254d2b67f"} Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.062919 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc196734c95a3f30e6d6acd67b467ff588ff9d5b0572ae243455ea2254d2b67f" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.062956 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-6l7qs" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.189300 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data\") pod \"db7c43bd-490e-4742-8a72-ed0687faf4dd\" (UID: \"db7c43bd-490e-4742-8a72-ed0687faf4dd\") " Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.202311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data" (OuterVolumeSpecName: "config-data") pod "db7c43bd-490e-4742-8a72-ed0687faf4dd" (UID: "db7c43bd-490e-4742-8a72-ed0687faf4dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.241208 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7dcb464dcd-dklmw"] Jan 29 16:56:13 crc kubenswrapper[4813]: E0129 16:56:13.241614 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7c43bd-490e-4742-8a72-ed0687faf4dd" containerName="placement-db-sync" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.241640 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7c43bd-490e-4742-8a72-ed0687faf4dd" containerName="placement-db-sync" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.241881 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7c43bd-490e-4742-8a72-ed0687faf4dd" containerName="placement-db-sync" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.243039 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.245704 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.245931 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.271561 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7dcb464dcd-dklmw"] Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.279784 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gz589" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.286231 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gz589" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.291915 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db7c43bd-490e-4742-8a72-ed0687faf4dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.393826 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-config-data\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.393941 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-logs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.393997 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-combined-ca-bundle\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.394050 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-scripts\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.394076 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-internal-tls-certs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.394107 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-public-tls-certs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.394149 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bc8n\" (UniqueName: \"kubernetes.io/projected/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-kube-api-access-2bc8n\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.495517 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-logs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.495584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-combined-ca-bundle\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.495615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-scripts\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.495636 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-internal-tls-certs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.495661 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-public-tls-certs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.495684 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bc8n\" (UniqueName: 
\"kubernetes.io/projected/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-kube-api-access-2bc8n\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.495752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-config-data\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.496773 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-logs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.500305 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-scripts\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.500620 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-public-tls-certs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.500692 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-config-data\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.500701 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-combined-ca-bundle\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.508745 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-internal-tls-certs\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.517535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bc8n\" (UniqueName: \"kubernetes.io/projected/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-kube-api-access-2bc8n\") pod \"placement-7dcb464dcd-dklmw\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:13 crc kubenswrapper[4813]: I0129 16:56:13.566515 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.060976 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7dcb464dcd-dklmw"] Jan 29 16:56:14 crc kubenswrapper[4813]: W0129 16:56:14.080302 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode504c0d2_5734_47b7_aa7f_4cdb2a339d41.slice/crio-52ef1d8e954414dc903d1f35dbdc42e5301f7aca7139b39bab67daf2642e0526 WatchSource:0}: Error finding container 52ef1d8e954414dc903d1f35dbdc42e5301f7aca7139b39bab67daf2642e0526: Status 404 returned error can't find the container with id 52ef1d8e954414dc903d1f35dbdc42e5301f7aca7139b39bab67daf2642e0526 Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.338373 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gz589" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="registry-server" probeResult="failure" output=< Jan 29 16:56:14 crc kubenswrapper[4813]: timeout: failed to connect service ":50051" within 1s Jan 29 16:56:14 crc kubenswrapper[4813]: > Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.387161 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-72hcg" Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.512761 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-db-sync-config-data\") pod \"a9d3644d-4332-4c4e-a354-11aa4588e143\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.512855 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5fjx\" (UniqueName: \"kubernetes.io/projected/a9d3644d-4332-4c4e-a354-11aa4588e143-kube-api-access-k5fjx\") pod \"a9d3644d-4332-4c4e-a354-11aa4588e143\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.512998 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-combined-ca-bundle\") pod \"a9d3644d-4332-4c4e-a354-11aa4588e143\" (UID: \"a9d3644d-4332-4c4e-a354-11aa4588e143\") " Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.521229 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d3644d-4332-4c4e-a354-11aa4588e143-kube-api-access-k5fjx" (OuterVolumeSpecName: "kube-api-access-k5fjx") pod "a9d3644d-4332-4c4e-a354-11aa4588e143" (UID: "a9d3644d-4332-4c4e-a354-11aa4588e143"). InnerVolumeSpecName "kube-api-access-k5fjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.521821 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a9d3644d-4332-4c4e-a354-11aa4588e143" (UID: "a9d3644d-4332-4c4e-a354-11aa4588e143"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.547754 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9d3644d-4332-4c4e-a354-11aa4588e143" (UID: "a9d3644d-4332-4c4e-a354-11aa4588e143"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.615400 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.615438 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a9d3644d-4332-4c4e-a354-11aa4588e143-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:14 crc kubenswrapper[4813]: I0129 16:56:14.615453 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5fjx\" (UniqueName: \"kubernetes.io/projected/a9d3644d-4332-4c4e-a354-11aa4588e143-kube-api-access-k5fjx\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.094365 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-72hcg" event={"ID":"a9d3644d-4332-4c4e-a354-11aa4588e143","Type":"ContainerDied","Data":"14d80d20534fb3c479ef21fa01a55a510cb657561b4941cd82fd308e08e3f5d4"} Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.094400 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-72hcg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.094413 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14d80d20534fb3c479ef21fa01a55a510cb657561b4941cd82fd308e08e3f5d4" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.098444 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dcb464dcd-dklmw" event={"ID":"e504c0d2-5734-47b7-aa7f-4cdb2a339d41","Type":"ContainerStarted","Data":"b2f593231b9c6777313a8a59f22732777b3c29a364623b14415c2eec818f17c5"} Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.098488 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dcb464dcd-dklmw" event={"ID":"e504c0d2-5734-47b7-aa7f-4cdb2a339d41","Type":"ContainerStarted","Data":"e28682307f39725fe7f765dc526b169a558cafd630133cfcaa46e4c31f5198f9"} Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.098500 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dcb464dcd-dklmw" event={"ID":"e504c0d2-5734-47b7-aa7f-4cdb2a339d41","Type":"ContainerStarted","Data":"52ef1d8e954414dc903d1f35dbdc42e5301f7aca7139b39bab67daf2642e0526"} Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.098608 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.126648 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7dcb464dcd-dklmw" podStartSLOduration=2.126631016 podStartE2EDuration="2.126631016s" podCreationTimestamp="2026-01-29 16:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-29 16:56:15.120685522 +0000 UTC m=+1627.607888748" watchObservedRunningTime="2026-01-29 16:56:15.126631016 +0000 UTC m=+1627.613834232" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.363336 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b9868448c-29tkg"] Jan 29 16:56:15 crc kubenswrapper[4813]: E0129 16:56:15.363763 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d3644d-4332-4c4e-a354-11aa4588e143" containerName="barbican-db-sync" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.363781 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d3644d-4332-4c4e-a354-11aa4588e143" containerName="barbican-db-sync" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.364010 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d3644d-4332-4c4e-a354-11aa4588e143" containerName="barbican-db-sync" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.365294 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.368487 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.369152 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ht4s5" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.371680 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.425382 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b9868448c-29tkg"] Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.426716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-logs\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.426933 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.427089 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn992\" (UniqueName: \"kubernetes.io/projected/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-kube-api-access-nn992\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.427440 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-combined-ca-bundle\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.427573 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data-custom\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.453165 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-864b755644-8j2zm"] Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.454915 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.469135 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.509032 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-864b755644-8j2zm"] Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.531519 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.532012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data-custom\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.532094 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-combined-ca-bundle\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.532244 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data-custom\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.532340 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-logs\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.534453 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-logs\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: 
I0129 16:56:15.538537 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-combined-ca-bundle\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.538660 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.538714 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsf88\" (UniqueName: \"kubernetes.io/projected/3c26451a-b543-4a70-8b46-73cd7d92e45c-kube-api-access-nsf88\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.538768 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c26451a-b543-4a70-8b46-73cd7d92e45c-logs\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.538858 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn992\" (UniqueName: \"kubernetes.io/projected/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-kube-api-access-nn992\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.539494 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-combined-ca-bundle\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.546820 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data-custom\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.556127 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.571653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn992\" (UniqueName: \"kubernetes.io/projected/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-kube-api-access-nn992\") pod \"barbican-worker-5b9868448c-29tkg\" (UID: 
\"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.618843 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-c8tzj"] Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.620685 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.643391 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.643455 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data-custom\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.643531 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-combined-ca-bundle\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.643574 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsf88\" (UniqueName: \"kubernetes.io/projected/3c26451a-b543-4a70-8b46-73cd7d92e45c-kube-api-access-nsf88\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.643598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c26451a-b543-4a70-8b46-73cd7d92e45c-logs\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.643992 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c26451a-b543-4a70-8b46-73cd7d92e45c-logs\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.650181 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-c8tzj"] Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.656676 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data-custom\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.690520 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b9868448c-29tkg"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.697879 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.711655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsf88\" (UniqueName: \"kubernetes.io/projected/3c26451a-b543-4a70-8b46-73cd7d92e45c-kube-api-access-nsf88\") pod \"barbican-keystone-listener-864b755644-8j2zm\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " pod="openstack/barbican-keystone-listener-864b755644-8j2zm"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.744927 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-config\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.744983 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.745037 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.745067 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.745098 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brnqn\" (UniqueName: \"kubernetes.io/projected/82bc41a3-e85a-455e-ab4e-75a839935abe-kube-api-access-brnqn\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.745142 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-svc\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.799779 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-864b755644-8j2zm"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.802665 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-84dbdc9b7b-ttpgj"]
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.807461 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.810533 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84dbdc9b7b-ttpgj"]
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.811800 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-svc\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846286 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-combined-ca-bundle\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846446 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846535 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-config\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846612 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data-custom\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846687 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846790 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8s7m\" (UniqueName: \"kubernetes.io/projected/1f680f7d-76bb-484d-9158-906647eb8e6f-kube-api-access-f8s7m\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846877 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-svc\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.846943 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.847026 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f680f7d-76bb-484d-9158-906647eb8e6f-logs\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.847136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.847235 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brnqn\" (UniqueName: \"kubernetes.io/projected/82bc41a3-e85a-455e-ab4e-75a839935abe-kube-api-access-brnqn\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.847445 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-sb\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.847458 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-config\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.847957 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-nb\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.847997 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-swift-storage-0\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.875329 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brnqn\" (UniqueName: \"kubernetes.io/projected/82bc41a3-e85a-455e-ab4e-75a839935abe-kube-api-access-brnqn\") pod \"dnsmasq-dns-8fffc8985-c8tzj\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " pod="openstack/dnsmasq-dns-8fffc8985-c8tzj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.949062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-combined-ca-bundle\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.949205 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.949230 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data-custom\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.949284 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8s7m\" (UniqueName: \"kubernetes.io/projected/1f680f7d-76bb-484d-9158-906647eb8e6f-kube-api-access-f8s7m\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.949366 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f680f7d-76bb-484d-9158-906647eb8e6f-logs\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.950323 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f680f7d-76bb-484d-9158-906647eb8e6f-logs\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.954180 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-combined-ca-bundle\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.956017 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj"
\"config-data\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.960766 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data-custom\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:15 crc kubenswrapper[4813]: I0129 16:56:15.973592 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8s7m\" (UniqueName: \"kubernetes.io/projected/1f680f7d-76bb-484d-9158-906647eb8e6f-kube-api-access-f8s7m\") pod \"barbican-api-84dbdc9b7b-ttpgj\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:16 crc kubenswrapper[4813]: I0129 16:56:16.069647 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" Jan 29 16:56:16 crc kubenswrapper[4813]: I0129 16:56:16.119891 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:16 crc kubenswrapper[4813]: I0129 16:56:16.150558 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:16 crc kubenswrapper[4813]: I0129 16:56:16.219804 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-864b755644-8j2zm"] Jan 29 16:56:16 crc kubenswrapper[4813]: I0129 16:56:16.303310 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b9868448c-29tkg"] Jan 29 16:56:16 crc kubenswrapper[4813]: I0129 16:56:16.640532 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-c8tzj"] Jan 29 16:56:16 crc kubenswrapper[4813]: I0129 16:56:16.743434 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84dbdc9b7b-ttpgj"] Jan 29 16:56:16 crc kubenswrapper[4813]: W0129 16:56:16.755450 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f680f7d_76bb_484d_9158_906647eb8e6f.slice/crio-d3a9e8a466cb9f8bbb059dfd4158e0a8b3cc48b01d678e78b99a7d46e0a36718 WatchSource:0}: Error finding container d3a9e8a466cb9f8bbb059dfd4158e0a8b3cc48b01d678e78b99a7d46e0a36718: Status 404 returned error can't find the container with id d3a9e8a466cb9f8bbb059dfd4158e0a8b3cc48b01d678e78b99a7d46e0a36718 Jan 29 16:56:17 crc kubenswrapper[4813]: I0129 16:56:17.137360 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" event={"ID":"1f680f7d-76bb-484d-9158-906647eb8e6f","Type":"ContainerStarted","Data":"ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d"} Jan 29 16:56:17 crc kubenswrapper[4813]: I0129 16:56:17.137718 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" event={"ID":"1f680f7d-76bb-484d-9158-906647eb8e6f","Type":"ContainerStarted","Data":"d3a9e8a466cb9f8bbb059dfd4158e0a8b3cc48b01d678e78b99a7d46e0a36718"} Jan 29 16:56:17 crc kubenswrapper[4813]: I0129 16:56:17.139792 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerID="242845c15e2727e3a31cb4d22d31b47fe6e19a8339c92e24bd42cf6af78466c5" exitCode=0 Jan 29 16:56:17 crc kubenswrapper[4813]: I0129 16:56:17.139866 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" event={"ID":"82bc41a3-e85a-455e-ab4e-75a839935abe","Type":"ContainerDied","Data":"242845c15e2727e3a31cb4d22d31b47fe6e19a8339c92e24bd42cf6af78466c5"} Jan 29 16:56:17 crc kubenswrapper[4813]: I0129 16:56:17.139896 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" event={"ID":"82bc41a3-e85a-455e-ab4e-75a839935abe","Type":"ContainerStarted","Data":"27fc2e2b35e355f6f7ae180afa1ed0aca0cb1a3368014df8fb33f47746547dd9"} Jan 29 16:56:17 crc kubenswrapper[4813]: I0129 16:56:17.142403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" event={"ID":"3c26451a-b543-4a70-8b46-73cd7d92e45c","Type":"ContainerStarted","Data":"9921a16df065f5224071914630f307aeb05d8de644fdcb8ebb09308052e0fe51"} Jan 29 16:56:17 crc kubenswrapper[4813]: I0129 16:56:17.143901 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b9868448c-29tkg" event={"ID":"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e","Type":"ContainerStarted","Data":"24573193c22eb48d12a34100def86f158f24ea567d8a2096503adcd6288de301"} Jan 29 16:56:18 crc kubenswrapper[4813]: I0129 16:56:18.177017 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" event={"ID":"1f680f7d-76bb-484d-9158-906647eb8e6f","Type":"ContainerStarted","Data":"49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a"} Jan 29 16:56:18 crc kubenswrapper[4813]: I0129 16:56:18.177189 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:18 crc kubenswrapper[4813]: I0129 16:56:18.177229 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:18 crc kubenswrapper[4813]: I0129 16:56:18.182001 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" event={"ID":"82bc41a3-e85a-455e-ab4e-75a839935abe","Type":"ContainerStarted","Data":"afe48bc1f4f9bb2301abd2f5d0d91262dd07a5bb6f5439d546425fad2348981a"} Jan 29 16:56:18 crc kubenswrapper[4813]: I0129 16:56:18.182230 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" Jan 29 16:56:18 crc kubenswrapper[4813]: I0129 16:56:18.203198 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" podStartSLOduration=3.203183001 podStartE2EDuration="3.203183001s" podCreationTimestamp="2026-01-29 16:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:18.195299944 +0000 UTC m=+1630.682503160" watchObservedRunningTime="2026-01-29 16:56:18.203183001 +0000 UTC m=+1630.690386237" Jan 29 16:56:18 crc kubenswrapper[4813]: I0129 16:56:18.214214 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" podStartSLOduration=3.214149568 podStartE2EDuration="3.214149568s" podCreationTimestamp="2026-01-29 16:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 16:56:18.213138025 +0000 UTC m=+1630.700341241" watchObservedRunningTime="2026-01-29 16:56:18.214149568 +0000 UTC m=+1630.701352794" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.194339 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" event={"ID":"3c26451a-b543-4a70-8b46-73cd7d92e45c","Type":"ContainerStarted","Data":"d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043"} Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.194691 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" event={"ID":"3c26451a-b543-4a70-8b46-73cd7d92e45c","Type":"ContainerStarted","Data":"c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b"} Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.199192 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b9868448c-29tkg" event={"ID":"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e","Type":"ContainerStarted","Data":"ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891"} Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.199274 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b9868448c-29tkg" event={"ID":"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e","Type":"ContainerStarted","Data":"085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e"} Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.201518 4813 generic.go:334] "Generic (PLEG): container finished" podID="944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" containerID="405b32419ba7c2468a5c44f5a8ea0559199efb36440983f8204fe5b78e3d66c2" exitCode=0 Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.201714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwds2" event={"ID":"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6","Type":"ContainerDied","Data":"405b32419ba7c2468a5c44f5a8ea0559199efb36440983f8204fe5b78e3d66c2"} Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.218775 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" podStartSLOduration=2.226675577 podStartE2EDuration="4.218757316s" podCreationTimestamp="2026-01-29 16:56:15 +0000 UTC" firstStartedPulling="2026-01-29 16:56:16.267358768 +0000 UTC m=+1628.754561984" lastFinishedPulling="2026-01-29 16:56:18.259440507 +0000 UTC m=+1630.746643723" observedRunningTime="2026-01-29 16:56:19.209815955 +0000 UTC m=+1631.697019181" watchObservedRunningTime="2026-01-29 16:56:19.218757316 +0000 UTC m=+1631.705960532" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.251816 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b9868448c-29tkg" podStartSLOduration=2.32150783 podStartE2EDuration="4.251794959s" podCreationTimestamp="2026-01-29 16:56:15 +0000 UTC" firstStartedPulling="2026-01-29 16:56:16.323253105 +0000 UTC m=+1628.810456321" lastFinishedPulling="2026-01-29 16:56:18.253540234 +0000 UTC m=+1630.740743450" observedRunningTime="2026-01-29 16:56:19.250370877 +0000 UTC m=+1631.737574103" watchObservedRunningTime="2026-01-29 16:56:19.251794959 +0000 UTC m=+1631.738998175" Jan 29 16:56:19 crc kubenswrapper[4813]: E0129 16:56:19.363920 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:56:19 crc kubenswrapper[4813]: E0129 16:56:19.364171 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvtpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e6e8842a-9f0c-493c-a37c-581d1626a1ec): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:19 crc kubenswrapper[4813]: E0129 16:56:19.365919 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: 
\"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.617048 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-65d9b85856-5rdmb"] Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.618949 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.622252 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.622272 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.632761 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65d9b85856-5rdmb"] Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.729329 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-internal-tls-certs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.729407 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w898c\" (UniqueName: \"kubernetes.io/projected/dd942f21-0785-443e-ab04-27548ecd9207-kube-api-access-w898c\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.729434 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-public-tls-certs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.729469 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data-custom\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.729495 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-combined-ca-bundle\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.729528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: 
\"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.729560 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd942f21-0785-443e-ab04-27548ecd9207-logs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.831727 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.832051 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd942f21-0785-443e-ab04-27548ecd9207-logs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.832252 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-internal-tls-certs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.832426 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w898c\" (UniqueName: \"kubernetes.io/projected/dd942f21-0785-443e-ab04-27548ecd9207-kube-api-access-w898c\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.832555 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-public-tls-certs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.832621 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd942f21-0785-443e-ab04-27548ecd9207-logs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.832806 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data-custom\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.832961 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-combined-ca-bundle\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " 
pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.837501 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-public-tls-certs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.837785 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-internal-tls-certs\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.838510 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.841134 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data-custom\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.852587 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-combined-ca-bundle\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.853091 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w898c\" (UniqueName: \"kubernetes.io/projected/dd942f21-0785-443e-ab04-27548ecd9207-kube-api-access-w898c\") pod \"barbican-api-65d9b85856-5rdmb\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:19 crc kubenswrapper[4813]: I0129 16:56:19.936300 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.415879 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65d9b85856-5rdmb"] Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.555853 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-mwds2" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.645305 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-scripts\") pod \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.645380 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-config-data\") pod \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.645457 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-combined-ca-bundle\") pod \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.645478 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-db-sync-config-data\") pod \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.645546 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rlh\" (UniqueName: \"kubernetes.io/projected/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-kube-api-access-v4rlh\") pod \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.645593 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-etc-machine-id\") pod \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\" (UID: \"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6\") " Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.645937 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" (UID: "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.659886 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" (UID: "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.661787 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-kube-api-access-v4rlh" (OuterVolumeSpecName: "kube-api-access-v4rlh") pod "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" (UID: "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6"). InnerVolumeSpecName "kube-api-access-v4rlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.676300 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-scripts" (OuterVolumeSpecName: "scripts") pod "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" (UID: "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.693213 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" (UID: "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.730190 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-config-data" (OuterVolumeSpecName: "config-data") pod "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" (UID: "944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.747532 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4rlh\" (UniqueName: \"kubernetes.io/projected/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-kube-api-access-v4rlh\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.747578 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.747590 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.747600 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.747614 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:20 crc kubenswrapper[4813]: I0129 16:56:20.747623 4813 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.224321 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65d9b85856-5rdmb" event={"ID":"dd942f21-0785-443e-ab04-27548ecd9207","Type":"ContainerStarted","Data":"a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a"} Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.224710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65d9b85856-5rdmb" 
event={"ID":"dd942f21-0785-443e-ab04-27548ecd9207","Type":"ContainerStarted","Data":"12b3250852d091e4650dc92a412de53c26b0015c80225f790b4c807478790e2e"} Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.226104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwds2" event={"ID":"944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6","Type":"ContainerDied","Data":"2add7b406209ef570d3983a120196acc30660abba6c14eac6269f2a50b90583f"} Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.226177 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2add7b406209ef570d3983a120196acc30660abba6c14eac6269f2a50b90583f" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.226238 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwds2" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.329922 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pcg84" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.442197 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pcg84" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.592699 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcg84"] Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.603063 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:21 crc kubenswrapper[4813]: E0129 16:56:21.603434 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" containerName="cinder-db-sync" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.603447 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" containerName="cinder-db-sync" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.603640 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" containerName="cinder-db-sync" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.604655 4813 util.go:30] "No sandbox for pod can be found. 
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.608134 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.608515 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.608672 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-frg6g"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.608978 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.635556 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.673597 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.673726 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.673833 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-scripts\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.673936 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76vm7\" (UniqueName: \"kubernetes.io/projected/b38f6e06-429d-45ad-b368-d1d6c639619b-kube-api-access-76vm7\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.673973 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.673989 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38f6e06-429d-45ad-b368-d1d6c639619b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.692159 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-c8tzj"]
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.692611 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" podUID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerName="dnsmasq-dns" containerID="cri-o://afe48bc1f4f9bb2301abd2f5d0d91262dd07a5bb6f5439d546425fad2348981a" gracePeriod=10
podUID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerName="dnsmasq-dns" containerID="cri-o://afe48bc1f4f9bb2301abd2f5d0d91262dd07a5bb6f5439d546425fad2348981a" gracePeriod=10 Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.735001 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d96cd6c9c-z54c4"] Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.743514 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.769915 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d96cd6c9c-z54c4"] Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778263 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-svc\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778296 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-scripts\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778314 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rl5p\" (UniqueName: \"kubernetes.io/projected/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-kube-api-access-7rl5p\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778332 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778352 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778375 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778401 4813 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-76vm7\" (UniqueName: \"kubernetes.io/projected/b38f6e06-429d-45ad-b368-d1d6c639619b-kube-api-access-76vm7\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778425 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778442 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38f6e06-429d-45ad-b368-d1d6c639619b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778483 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-config\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.778524 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.782312 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38f6e06-429d-45ad-b368-d1d6c639619b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.786647 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.786667 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-scripts\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.786672 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.789051 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " 
pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.817775 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76vm7\" (UniqueName: \"kubernetes.io/projected/b38f6e06-429d-45ad-b368-d1d6c639619b-kube-api-access-76vm7\") pod \"cinder-scheduler-0\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.892543 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-config\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.892664 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-svc\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.892711 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rl5p\" (UniqueName: \"kubernetes.io/projected/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-kube-api-access-7rl5p\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.892738 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-swift-storage-0\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.892766 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.892795 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.893980 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-sb\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.895755 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-svc\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.895937 4813 
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.896030 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-nb\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.897903 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-config\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.921863 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rl5p\" (UniqueName: \"kubernetes.io/projected/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-kube-api-access-7rl5p\") pod \"dnsmasq-dns-6d96cd6c9c-z54c4\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.938197 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.964186 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.965798 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.968204 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.990917 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.994753 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-scripts\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.994810 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.994981 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.995019 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.995176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.995324 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nhd8\" (UniqueName: \"kubernetes.io/projected/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-kube-api-access-2nhd8\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:21 crc kubenswrapper[4813]: I0129 16:56:21.995365 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-logs\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.079071 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.097198 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.097246 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nhd8\" (UniqueName: \"kubernetes.io/projected/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-kube-api-access-2nhd8\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.097268 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-logs\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.099406 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-logs\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.102578 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-scripts\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.102623 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.102639 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.102739 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.102774 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.102957 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " 
pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.117098 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.118517 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.126729 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-scripts\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.133719 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nhd8\" (UniqueName: \"kubernetes.io/projected/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-kube-api-access-2nhd8\") pod \"cinder-api-0\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.242401 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:56:22 crc kubenswrapper[4813]: E0129 16:56:22.242691 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.250402 4813 generic.go:334] "Generic (PLEG): container finished" podID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerID="afe48bc1f4f9bb2301abd2f5d0d91262dd07a5bb6f5439d546425fad2348981a" exitCode=0 Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.265855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" event={"ID":"82bc41a3-e85a-455e-ab4e-75a839935abe","Type":"ContainerDied","Data":"afe48bc1f4f9bb2301abd2f5d0d91262dd07a5bb6f5439d546425fad2348981a"} Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.267787 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65d9b85856-5rdmb" event={"ID":"dd942f21-0785-443e-ab04-27548ecd9207","Type":"ContainerStarted","Data":"ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8"} Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.267834 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.267852 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.296422 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-65d9b85856-5rdmb" podStartSLOduration=3.296409186 
podStartE2EDuration="3.296409186s" podCreationTimestamp="2026-01-29 16:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:22.294854481 +0000 UTC m=+1634.782057697" watchObservedRunningTime="2026-01-29 16:56:22.296409186 +0000 UTC m=+1634.783612402" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.355582 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.507824 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.699287 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.721681 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d96cd6c9c-z54c4"] Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.725434 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-sb\") pod \"82bc41a3-e85a-455e-ab4e-75a839935abe\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.725493 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-swift-storage-0\") pod \"82bc41a3-e85a-455e-ab4e-75a839935abe\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.725534 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-nb\") pod \"82bc41a3-e85a-455e-ab4e-75a839935abe\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.725565 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-svc\") pod \"82bc41a3-e85a-455e-ab4e-75a839935abe\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.725663 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-config\") pod \"82bc41a3-e85a-455e-ab4e-75a839935abe\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.725694 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brnqn\" (UniqueName: \"kubernetes.io/projected/82bc41a3-e85a-455e-ab4e-75a839935abe-kube-api-access-brnqn\") pod \"82bc41a3-e85a-455e-ab4e-75a839935abe\" (UID: \"82bc41a3-e85a-455e-ab4e-75a839935abe\") " Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.733135 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82bc41a3-e85a-455e-ab4e-75a839935abe-kube-api-access-brnqn" (OuterVolumeSpecName: "kube-api-access-brnqn") pod "82bc41a3-e85a-455e-ab4e-75a839935abe" (UID: "82bc41a3-e85a-455e-ab4e-75a839935abe"). 
InnerVolumeSpecName "kube-api-access-brnqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.795421 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-config" (OuterVolumeSpecName: "config") pod "82bc41a3-e85a-455e-ab4e-75a839935abe" (UID: "82bc41a3-e85a-455e-ab4e-75a839935abe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.803658 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82bc41a3-e85a-455e-ab4e-75a839935abe" (UID: "82bc41a3-e85a-455e-ab4e-75a839935abe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.808080 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82bc41a3-e85a-455e-ab4e-75a839935abe" (UID: "82bc41a3-e85a-455e-ab4e-75a839935abe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.808603 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82bc41a3-e85a-455e-ab4e-75a839935abe" (UID: "82bc41a3-e85a-455e-ab4e-75a839935abe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.813897 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82bc41a3-e85a-455e-ab4e-75a839935abe" (UID: "82bc41a3-e85a-455e-ab4e-75a839935abe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.829536 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.829586 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.829598 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.829609 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.829620 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82bc41a3-e85a-455e-ab4e-75a839935abe-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.829629 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brnqn\" (UniqueName: \"kubernetes.io/projected/82bc41a3-e85a-455e-ab4e-75a839935abe-kube-api-access-brnqn\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:22 crc kubenswrapper[4813]: I0129 16:56:22.868244 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:22 crc kubenswrapper[4813]: W0129 16:56:22.888752 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d3c01e5_8a01_4c96_baef_2cf6b14c76a5.slice/crio-dfbaaf1b9c1dd06dbece2a93b32a4939580d46ae54479b8a187ed15a042aa13d WatchSource:0}: Error finding container dfbaaf1b9c1dd06dbece2a93b32a4939580d46ae54479b8a187ed15a042aa13d: Status 404 returned error can't find the container with id dfbaaf1b9c1dd06dbece2a93b32a4939580d46ae54479b8a187ed15a042aa13d Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.277855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5","Type":"ContainerStarted","Data":"dfbaaf1b9c1dd06dbece2a93b32a4939580d46ae54479b8a187ed15a042aa13d"} Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.282508 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" event={"ID":"82bc41a3-e85a-455e-ab4e-75a839935abe","Type":"ContainerDied","Data":"27fc2e2b35e355f6f7ae180afa1ed0aca0cb1a3368014df8fb33f47746547dd9"} Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.282555 4813 scope.go:117] "RemoveContainer" containerID="afe48bc1f4f9bb2301abd2f5d0d91262dd07a5bb6f5439d546425fad2348981a" Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.282654 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fffc8985-c8tzj" Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.296187 4813 generic.go:334] "Generic (PLEG): container finished" podID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" containerID="bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4" exitCode=0 Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.296300 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" event={"ID":"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7","Type":"ContainerDied","Data":"bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4"} Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.296392 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" event={"ID":"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7","Type":"ContainerStarted","Data":"0708d2e1d97f96cdb02dcee8b1d912c5c5d62247e12685563b32ae264825c17b"} Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.305577 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b38f6e06-429d-45ad-b368-d1d6c639619b","Type":"ContainerStarted","Data":"3b9527b0f7a2e154e0dd37d40ed4e87ef756bba5e490c08edf0395ac0066b704"} Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.305753 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pcg84" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="registry-server" containerID="cri-o://96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98" gracePeriod=2 Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.337836 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gz589" Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.371520 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-c8tzj"] Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.386522 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8fffc8985-c8tzj"] Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.408164 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.423525 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gz589" Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.466122 4813 scope.go:117] "RemoveContainer" containerID="242845c15e2727e3a31cb4d22d31b47fe6e19a8339c92e24bd42cf6af78466c5" Jan 29 16:56:23 crc kubenswrapper[4813]: I0129 16:56:23.968716 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gz589"] Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.051846 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pcg84" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.074129 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-utilities\") pod \"941d07ef-0203-4f55-a276-cb2216375048\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.074250 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh6w4\" (UniqueName: \"kubernetes.io/projected/941d07ef-0203-4f55-a276-cb2216375048-kube-api-access-zh6w4\") pod \"941d07ef-0203-4f55-a276-cb2216375048\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.075453 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-catalog-content\") pod \"941d07ef-0203-4f55-a276-cb2216375048\" (UID: \"941d07ef-0203-4f55-a276-cb2216375048\") " Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.076844 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-utilities" (OuterVolumeSpecName: "utilities") pod "941d07ef-0203-4f55-a276-cb2216375048" (UID: "941d07ef-0203-4f55-a276-cb2216375048"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.085045 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941d07ef-0203-4f55-a276-cb2216375048-kube-api-access-zh6w4" (OuterVolumeSpecName: "kube-api-access-zh6w4") pod "941d07ef-0203-4f55-a276-cb2216375048" (UID: "941d07ef-0203-4f55-a276-cb2216375048"). InnerVolumeSpecName "kube-api-access-zh6w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.132266 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "941d07ef-0203-4f55-a276-cb2216375048" (UID: "941d07ef-0203-4f55-a276-cb2216375048"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.178710 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.178751 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh6w4\" (UniqueName: \"kubernetes.io/projected/941d07ef-0203-4f55-a276-cb2216375048-kube-api-access-zh6w4\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.178763 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941d07ef-0203-4f55-a276-cb2216375048-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.256503 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82bc41a3-e85a-455e-ab4e-75a839935abe" path="/var/lib/kubelet/pods/82bc41a3-e85a-455e-ab4e-75a839935abe/volumes" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.318450 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" event={"ID":"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7","Type":"ContainerStarted","Data":"097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c"} Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.318530 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.323608 4813 generic.go:334] "Generic (PLEG): container finished" podID="941d07ef-0203-4f55-a276-cb2216375048" containerID="96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98" exitCode=0 Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.323659 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pcg84" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.323707 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcg84" event={"ID":"941d07ef-0203-4f55-a276-cb2216375048","Type":"ContainerDied","Data":"96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98"} Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.323760 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pcg84" event={"ID":"941d07ef-0203-4f55-a276-cb2216375048","Type":"ContainerDied","Data":"76c6bfe1026a8a31c2cf5a99567d55e7e1edcea25141ea2b0d9de019623d717b"} Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.323785 4813 scope.go:117] "RemoveContainer" containerID="96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.326045 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5","Type":"ContainerStarted","Data":"8d41e1102790e2887bdda61240a62875c8570e96d8db2e815cad41187291ea40"} Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.356237 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" podStartSLOduration=3.35621279 podStartE2EDuration="3.35621279s" podCreationTimestamp="2026-01-29 16:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:24.338236056 +0000 UTC m=+1636.825439282" watchObservedRunningTime="2026-01-29 16:56:24.35621279 +0000 UTC m=+1636.843416006" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.373393 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcg84"] Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.375644 4813 scope.go:117] "RemoveContainer" containerID="d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.382570 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pcg84"] Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.406144 4813 scope.go:117] "RemoveContainer" containerID="fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.467187 4813 scope.go:117] "RemoveContainer" containerID="96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98" Jan 29 16:56:24 crc kubenswrapper[4813]: E0129 16:56:24.468831 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98\": container with ID starting with 96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98 not found: ID does not exist" containerID="96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.468876 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98"} err="failed to get container status \"96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98\": rpc error: code = NotFound desc = could not find container 
\"96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98\": container with ID starting with 96cd58824c7d82387adebcd779a4d400bd53fdf097813f865a4b066c40c9be98 not found: ID does not exist" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.468905 4813 scope.go:117] "RemoveContainer" containerID="d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866" Jan 29 16:56:24 crc kubenswrapper[4813]: E0129 16:56:24.469284 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866\": container with ID starting with d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866 not found: ID does not exist" containerID="d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.469318 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866"} err="failed to get container status \"d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866\": rpc error: code = NotFound desc = could not find container \"d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866\": container with ID starting with d50312f0f763c436317269d9cd25348c7c00f64346f23de89714c01800fb5866 not found: ID does not exist" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.469339 4813 scope.go:117] "RemoveContainer" containerID="fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b" Jan 29 16:56:24 crc kubenswrapper[4813]: E0129 16:56:24.469673 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b\": container with ID starting with fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b not found: ID does not exist" containerID="fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.469696 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b"} err="failed to get container status \"fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b\": rpc error: code = NotFound desc = could not find container \"fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b\": container with ID starting with fd84c491063a3f35d1a4babd260bf93a5759fd10c8c827adb69ce16c322f641b not found: ID does not exist" Jan 29 16:56:24 crc kubenswrapper[4813]: I0129 16:56:24.758618 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:25 crc kubenswrapper[4813]: I0129 16:56:25.344833 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5","Type":"ContainerStarted","Data":"21737e70b5e6ff1ffa00203fee54ff2f846e67b88caf87dbbfeaa7c7e5140889"} Jan 29 16:56:25 crc kubenswrapper[4813]: I0129 16:56:25.346051 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 16:56:25 crc kubenswrapper[4813]: I0129 16:56:25.353310 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"b38f6e06-429d-45ad-b368-d1d6c639619b","Type":"ContainerStarted","Data":"597e6f46380ef38ac2bcadbe64956e102b8328ca5be05feb12c2604ede18b713"} Jan 29 16:56:25 crc kubenswrapper[4813]: I0129 16:56:25.353479 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gz589" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="registry-server" containerID="cri-o://9f570006a06221668892911b24c5a6e62fbd4b54f64b0206f5938aeda39f93b2" gracePeriod=2 Jan 29 16:56:25 crc kubenswrapper[4813]: I0129 16:56:25.354033 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b38f6e06-429d-45ad-b368-d1d6c639619b","Type":"ContainerStarted","Data":"097373cdf5d1b91fd692661a385eac765660eaac7045388ac48a76091664fe60"} Jan 29 16:56:25 crc kubenswrapper[4813]: I0129 16:56:25.379949 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.379932728 podStartE2EDuration="4.379932728s" podCreationTimestamp="2026-01-29 16:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:25.3724689 +0000 UTC m=+1637.859672116" watchObservedRunningTime="2026-01-29 16:56:25.379932728 +0000 UTC m=+1637.867135944" Jan 29 16:56:25 crc kubenswrapper[4813]: I0129 16:56:25.675972 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:26 crc kubenswrapper[4813]: I0129 16:56:26.251959 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941d07ef-0203-4f55-a276-cb2216375048" path="/var/lib/kubelet/pods/941d07ef-0203-4f55-a276-cb2216375048/volumes" Jan 29 16:56:26 crc kubenswrapper[4813]: I0129 16:56:26.271557 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-757f6546f5-txkdm" Jan 29 16:56:26 crc kubenswrapper[4813]: I0129 16:56:26.362274 4813 generic.go:334] "Generic (PLEG): container finished" podID="2309ea66-f028-4c72-b6e6-009762934e48" containerID="9f570006a06221668892911b24c5a6e62fbd4b54f64b0206f5938aeda39f93b2" exitCode=0 Jan 29 16:56:26 crc kubenswrapper[4813]: I0129 16:56:26.362340 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz589" event={"ID":"2309ea66-f028-4c72-b6e6-009762934e48","Type":"ContainerDied","Data":"9f570006a06221668892911b24c5a6e62fbd4b54f64b0206f5938aeda39f93b2"} Jan 29 16:56:26 crc kubenswrapper[4813]: I0129 16:56:26.362751 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerName="cinder-api-log" containerID="cri-o://8d41e1102790e2887bdda61240a62875c8570e96d8db2e815cad41187291ea40" gracePeriod=30 Jan 29 16:56:26 crc kubenswrapper[4813]: I0129 16:56:26.362763 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerName="cinder-api" containerID="cri-o://21737e70b5e6ff1ffa00203fee54ff2f846e67b88caf87dbbfeaa7c7e5140889" gracePeriod=30 Jan 29 16:56:26 crc kubenswrapper[4813]: I0129 16:56:26.938751 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.373636 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerID="8d41e1102790e2887bdda61240a62875c8570e96d8db2e815cad41187291ea40" exitCode=143 Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.374586 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5","Type":"ContainerDied","Data":"8d41e1102790e2887bdda61240a62875c8570e96d8db2e815cad41187291ea40"} Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.622431 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gz589" Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.643342 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.639942633 podStartE2EDuration="6.643322924s" podCreationTimestamp="2026-01-29 16:56:21 +0000 UTC" firstStartedPulling="2026-01-29 16:56:22.520631451 +0000 UTC m=+1635.007834667" lastFinishedPulling="2026-01-29 16:56:23.524011742 +0000 UTC m=+1636.011214958" observedRunningTime="2026-01-29 16:56:26.401753303 +0000 UTC m=+1638.888956519" watchObservedRunningTime="2026-01-29 16:56:27.643322924 +0000 UTC m=+1640.130526140" Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.649451 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-utilities\") pod \"2309ea66-f028-4c72-b6e6-009762934e48\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.649705 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-catalog-content\") pod \"2309ea66-f028-4c72-b6e6-009762934e48\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.649769 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgv86\" (UniqueName: \"kubernetes.io/projected/2309ea66-f028-4c72-b6e6-009762934e48-kube-api-access-pgv86\") pod \"2309ea66-f028-4c72-b6e6-009762934e48\" (UID: \"2309ea66-f028-4c72-b6e6-009762934e48\") " Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.658126 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-utilities" (OuterVolumeSpecName: "utilities") pod "2309ea66-f028-4c72-b6e6-009762934e48" (UID: "2309ea66-f028-4c72-b6e6-009762934e48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.658539 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2309ea66-f028-4c72-b6e6-009762934e48-kube-api-access-pgv86" (OuterVolumeSpecName: "kube-api-access-pgv86") pod "2309ea66-f028-4c72-b6e6-009762934e48" (UID: "2309ea66-f028-4c72-b6e6-009762934e48"). InnerVolumeSpecName "kube-api-access-pgv86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.754211 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.754453 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgv86\" (UniqueName: \"kubernetes.io/projected/2309ea66-f028-4c72-b6e6-009762934e48-kube-api-access-pgv86\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.834949 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2309ea66-f028-4c72-b6e6-009762934e48" (UID: "2309ea66-f028-4c72-b6e6-009762934e48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:27 crc kubenswrapper[4813]: I0129 16:56:27.858441 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2309ea66-f028-4c72-b6e6-009762934e48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.384501 4813 generic.go:334] "Generic (PLEG): container finished" podID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerID="21737e70b5e6ff1ffa00203fee54ff2f846e67b88caf87dbbfeaa7c7e5140889" exitCode=0 Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.384861 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5","Type":"ContainerDied","Data":"21737e70b5e6ff1ffa00203fee54ff2f846e67b88caf87dbbfeaa7c7e5140889"} Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.388208 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz589" event={"ID":"2309ea66-f028-4c72-b6e6-009762934e48","Type":"ContainerDied","Data":"37f1f478fd16083ecacc92d2319d3fff02e050377980af803ac00bc6d2c6e8bf"} Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.388257 4813 scope.go:117] "RemoveContainer" containerID="9f570006a06221668892911b24c5a6e62fbd4b54f64b0206f5938aeda39f93b2" Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.388329 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gz589" Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.422395 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gz589"] Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.423547 4813 scope.go:117] "RemoveContainer" containerID="93222a8700c2161b3d005f1f614b9f80f3a3d89e51073a5d5344932c63f7329b" Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.431486 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gz589"] Jan 29 16:56:28 crc kubenswrapper[4813]: I0129 16:56:28.457313 4813 scope.go:117] "RemoveContainer" containerID="6502ab3a984dfe4241e27edccf40f94cf2b56cacf279a4075bbfada3075971a4" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.044739 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.092702 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-combined-ca-bundle\") pod \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.092803 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data-custom\") pod \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.092886 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-logs\") pod \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.092976 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-scripts\") pod \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.093062 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nhd8\" (UniqueName: \"kubernetes.io/projected/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-kube-api-access-2nhd8\") pod \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.093159 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-etc-machine-id\") pod \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.093200 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data\") pod \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\" (UID: \"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5\") " Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.094975 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-logs" (OuterVolumeSpecName: "logs") pod "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" (UID: "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.097689 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" (UID: "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.100499 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-kube-api-access-2nhd8" (OuterVolumeSpecName: "kube-api-access-2nhd8") pod "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" (UID: "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5"). InnerVolumeSpecName "kube-api-access-2nhd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.101204 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" (UID: "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.105215 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-scripts" (OuterVolumeSpecName: "scripts") pod "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" (UID: "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.125397 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" (UID: "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.149159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data" (OuterVolumeSpecName: "config-data") pod "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" (UID: "8d3c01e5-8a01-4c96-baef-2cf6b14c76a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.195783 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.195833 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.195846 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.195858 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.195869 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nhd8\" (UniqueName: \"kubernetes.io/projected/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-kube-api-access-2nhd8\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.195882 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.195893 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.400399 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d3c01e5-8a01-4c96-baef-2cf6b14c76a5","Type":"ContainerDied","Data":"dfbaaf1b9c1dd06dbece2a93b32a4939580d46ae54479b8a187ed15a042aa13d"} Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.400463 4813 scope.go:117] "RemoveContainer" containerID="21737e70b5e6ff1ffa00203fee54ff2f846e67b88caf87dbbfeaa7c7e5140889" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.400573 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.443554 4813 scope.go:117] "RemoveContainer" containerID="8d41e1102790e2887bdda61240a62875c8570e96d8db2e815cad41187291ea40" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.448047 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.460871 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476075 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476522 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="registry-server" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476586 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="registry-server" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476629 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerName="dnsmasq-dns" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476657 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerName="dnsmasq-dns" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476670 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="extract-utilities" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476677 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="extract-utilities" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476693 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="extract-content" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476699 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="extract-content" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476729 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerName="init" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476737 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerName="init" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476750 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="extract-utilities" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476757 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="extract-utilities" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476772 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerName="cinder-api-log" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476779 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerName="cinder-api-log" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476813 4813 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="extract-content" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476822 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="extract-content" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476828 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="registry-server" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476836 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="registry-server" Jan 29 16:56:29 crc kubenswrapper[4813]: E0129 16:56:29.476852 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerName="cinder-api" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.476858 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerName="cinder-api" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.477390 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerName="cinder-api" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.477402 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="941d07ef-0203-4f55-a276-cb2216375048" containerName="registry-server" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.477440 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2309ea66-f028-4c72-b6e6-009762934e48" containerName="registry-server" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.477461 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="82bc41a3-e85a-455e-ab4e-75a839935abe" containerName="dnsmasq-dns" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.477471 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" containerName="cinder-api-log" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.478856 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.481291 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.481548 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.481708 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.497077 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602041 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2gf\" (UniqueName: \"kubernetes.io/projected/cd64c006-aedc-47b2-8704-3b1d05879f8c-kube-api-access-hr2gf\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602094 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602132 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602165 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd64c006-aedc-47b2-8704-3b1d05879f8c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602202 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602292 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd64c006-aedc-47b2-8704-3b1d05879f8c-logs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602339 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602377 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.602401 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-scripts\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.703812 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2gf\" (UniqueName: \"kubernetes.io/projected/cd64c006-aedc-47b2-8704-3b1d05879f8c-kube-api-access-hr2gf\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.703863 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.703880 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.703903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd64c006-aedc-47b2-8704-3b1d05879f8c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.703932 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.703956 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd64c006-aedc-47b2-8704-3b1d05879f8c-logs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.703980 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.704003 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.704028 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-scripts\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.704691 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd64c006-aedc-47b2-8704-3b1d05879f8c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.705377 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd64c006-aedc-47b2-8704-3b1d05879f8c-logs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.709625 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.711815 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.714653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.716997 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.718862 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.722692 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-scripts\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.723532 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2gf\" (UniqueName: \"kubernetes.io/projected/cd64c006-aedc-47b2-8704-3b1d05879f8c-kube-api-access-hr2gf\") pod \"cinder-api-0\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " pod="openstack/cinder-api-0" Jan 29 16:56:29 crc kubenswrapper[4813]: I0129 16:56:29.808487 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:56:30 crc kubenswrapper[4813]: I0129 16:56:30.253684 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2309ea66-f028-4c72-b6e6-009762934e48" path="/var/lib/kubelet/pods/2309ea66-f028-4c72-b6e6-009762934e48/volumes" Jan 29 16:56:30 crc kubenswrapper[4813]: I0129 16:56:30.255096 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d3c01e5-8a01-4c96-baef-2cf6b14c76a5" path="/var/lib/kubelet/pods/8d3c01e5-8a01-4c96-baef-2cf6b14c76a5/volumes" Jan 29 16:56:30 crc kubenswrapper[4813]: I0129 16:56:30.280664 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:56:30 crc kubenswrapper[4813]: I0129 16:56:30.417724 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd64c006-aedc-47b2-8704-3b1d05879f8c","Type":"ContainerStarted","Data":"3490996d3aad804b072a49ce30c9f6d370b0844f749b53aa9772ee7c8c5eab25"} Jan 29 16:56:31 crc kubenswrapper[4813]: E0129 16:56:31.242761 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.435349 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd64c006-aedc-47b2-8704-3b1d05879f8c","Type":"ContainerStarted","Data":"8a7ea31b28e2c6bfe834bf71fe3196b448efd62becd905ed803c99a217f36124"} Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.448866 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.450164 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.454250 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.455220 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-khltg" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.458682 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.492368 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.558384 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b99w\" (UniqueName: \"kubernetes.io/projected/124e029a-6d27-4b49-830c-4be46fc186cc-kube-api-access-2b99w\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.558496 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.558814 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.558956 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.660345 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.660794 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.660849 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.660888 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2b99w\" (UniqueName: \"kubernetes.io/projected/124e029a-6d27-4b49-830c-4be46fc186cc-kube-api-access-2b99w\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.661739 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.667849 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.675590 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.683181 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b99w\" (UniqueName: \"kubernetes.io/projected/124e029a-6d27-4b49-830c-4be46fc186cc-kube-api-access-2b99w\") pod \"openstackclient\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " pod="openstack/openstackclient" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.754528 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:31 crc kubenswrapper[4813]: I0129 16:56:31.771985 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.065212 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.082699 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.191396 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84dbdc9b7b-ttpgj"] Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.191779 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api-log" containerID="cri-o://ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d" gracePeriod=30 Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.192508 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api" containerID="cri-o://49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a" gracePeriod=30 Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.311547 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-9fd9p"] Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.312409 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" podUID="04dc22f3-d546-420e-96d0-103b9b9607d5" containerName="dnsmasq-dns" containerID="cri-o://b2f039f37c8b1bc651f0a8f050187759359507ddf84edbd6a349b6f625f8453a" gracePeriod=10 Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.370296 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.468636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"124e029a-6d27-4b49-830c-4be46fc186cc","Type":"ContainerStarted","Data":"f66d28d6004aca51a8c1763593969c26bbc53c0130f6920d0833e77b990aa436"} Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.481709 4813 generic.go:334] "Generic (PLEG): container finished" podID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerID="ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d" exitCode=143 Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.481811 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" event={"ID":"1f680f7d-76bb-484d-9158-906647eb8e6f","Type":"ContainerDied","Data":"ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d"} Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.496475 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd64c006-aedc-47b2-8704-3b1d05879f8c","Type":"ContainerStarted","Data":"33964ffaf6cb92f00377625e2792de4b5bdd034d751dddf7790ae69bc7e4ed47"} Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.497280 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.503302 4813 generic.go:334] "Generic (PLEG): container finished" podID="04dc22f3-d546-420e-96d0-103b9b9607d5" 
containerID="b2f039f37c8b1bc651f0a8f050187759359507ddf84edbd6a349b6f625f8453a" exitCode=0 Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.503845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" event={"ID":"04dc22f3-d546-420e-96d0-103b9b9607d5","Type":"ContainerDied","Data":"b2f039f37c8b1bc651f0a8f050187759359507ddf84edbd6a349b6f625f8453a"} Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.535131 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.535088898 podStartE2EDuration="3.535088898s" podCreationTimestamp="2026-01-29 16:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:32.531753283 +0000 UTC m=+1645.018956499" watchObservedRunningTime="2026-01-29 16:56:32.535088898 +0000 UTC m=+1645.022292114" Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.598213 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.677134 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:32 crc kubenswrapper[4813]: I0129 16:56:32.877252 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.012320 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gzs6\" (UniqueName: \"kubernetes.io/projected/04dc22f3-d546-420e-96d0-103b9b9607d5-kube-api-access-2gzs6\") pod \"04dc22f3-d546-420e-96d0-103b9b9607d5\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.012503 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-svc\") pod \"04dc22f3-d546-420e-96d0-103b9b9607d5\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.012573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-config\") pod \"04dc22f3-d546-420e-96d0-103b9b9607d5\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.012631 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-nb\") pod \"04dc22f3-d546-420e-96d0-103b9b9607d5\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.012707 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-swift-storage-0\") pod \"04dc22f3-d546-420e-96d0-103b9b9607d5\" (UID: \"04dc22f3-d546-420e-96d0-103b9b9607d5\") " Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.013965 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-sb\") pod \"04dc22f3-d546-420e-96d0-103b9b9607d5\" (UID: 
\"04dc22f3-d546-420e-96d0-103b9b9607d5\") " Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.032402 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04dc22f3-d546-420e-96d0-103b9b9607d5-kube-api-access-2gzs6" (OuterVolumeSpecName: "kube-api-access-2gzs6") pod "04dc22f3-d546-420e-96d0-103b9b9607d5" (UID: "04dc22f3-d546-420e-96d0-103b9b9607d5"). InnerVolumeSpecName "kube-api-access-2gzs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.082407 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "04dc22f3-d546-420e-96d0-103b9b9607d5" (UID: "04dc22f3-d546-420e-96d0-103b9b9607d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.085307 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "04dc22f3-d546-420e-96d0-103b9b9607d5" (UID: "04dc22f3-d546-420e-96d0-103b9b9607d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.088303 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "04dc22f3-d546-420e-96d0-103b9b9607d5" (UID: "04dc22f3-d546-420e-96d0-103b9b9607d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.088389 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-config" (OuterVolumeSpecName: "config") pod "04dc22f3-d546-420e-96d0-103b9b9607d5" (UID: "04dc22f3-d546-420e-96d0-103b9b9607d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.090719 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04dc22f3-d546-420e-96d0-103b9b9607d5" (UID: "04dc22f3-d546-420e-96d0-103b9b9607d5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.120377 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gzs6\" (UniqueName: \"kubernetes.io/projected/04dc22f3-d546-420e-96d0-103b9b9607d5-kube-api-access-2gzs6\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.120419 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.120430 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.120441 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.120452 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.120462 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04dc22f3-d546-420e-96d0-103b9b9607d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.515633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" event={"ID":"04dc22f3-d546-420e-96d0-103b9b9607d5","Type":"ContainerDied","Data":"160d8c3cbc114f3eb00e7af6f78aa268ae6f109ea014daad77c651d0d784641f"} Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.515704 4813 scope.go:117] "RemoveContainer" containerID="b2f039f37c8b1bc651f0a8f050187759359507ddf84edbd6a349b6f625f8453a" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.516807 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerName="cinder-scheduler" containerID="cri-o://097373cdf5d1b91fd692661a385eac765660eaac7045388ac48a76091664fe60" gracePeriod=30 Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.517017 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerName="probe" containerID="cri-o://597e6f46380ef38ac2bcadbe64956e102b8328ca5be05feb12c2604ede18b713" gracePeriod=30 Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.517218 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-9fd9p" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.559014 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-9fd9p"] Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.566926 4813 scope.go:117] "RemoveContainer" containerID="a6948d92b7dcb872b5c7356b2e950eba1f719743fab090e54cc404d2863fbd48" Jan 29 16:56:33 crc kubenswrapper[4813]: I0129 16:56:33.572270 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-9fd9p"] Jan 29 16:56:34 crc kubenswrapper[4813]: I0129 16:56:34.254384 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dc22f3-d546-420e-96d0-103b9b9607d5" path="/var/lib/kubelet/pods/04dc22f3-d546-420e-96d0-103b9b9607d5/volumes" Jan 29 16:56:34 crc kubenswrapper[4813]: I0129 16:56:34.537558 4813 generic.go:334] "Generic (PLEG): container finished" podID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerID="597e6f46380ef38ac2bcadbe64956e102b8328ca5be05feb12c2604ede18b713" exitCode=0 Jan 29 16:56:34 crc kubenswrapper[4813]: I0129 16:56:34.539324 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b38f6e06-429d-45ad-b368-d1d6c639619b","Type":"ContainerDied","Data":"597e6f46380ef38ac2bcadbe64956e102b8328ca5be05feb12c2604ede18b713"} Jan 29 16:56:35 crc kubenswrapper[4813]: I0129 16:56:35.552543 4813 generic.go:334] "Generic (PLEG): container finished" podID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerID="097373cdf5d1b91fd692661a385eac765660eaac7045388ac48a76091664fe60" exitCode=0 Jan 29 16:56:35 crc kubenswrapper[4813]: I0129 16:56:35.553361 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b38f6e06-429d-45ad-b368-d1d6c639619b","Type":"ContainerDied","Data":"097373cdf5d1b91fd692661a385eac765660eaac7045388ac48a76091664fe60"} Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.198566 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.204608 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.240427 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.240826 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295003 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-combined-ca-bundle\") pod \"1f680f7d-76bb-484d-9158-906647eb8e6f\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295060 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data\") pod \"b38f6e06-429d-45ad-b368-d1d6c639619b\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295129 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-scripts\") pod \"b38f6e06-429d-45ad-b368-d1d6c639619b\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295520 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data\") pod \"1f680f7d-76bb-484d-9158-906647eb8e6f\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295676 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data-custom\") pod \"1f680f7d-76bb-484d-9158-906647eb8e6f\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295728 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8s7m\" (UniqueName: \"kubernetes.io/projected/1f680f7d-76bb-484d-9158-906647eb8e6f-kube-api-access-f8s7m\") pod \"1f680f7d-76bb-484d-9158-906647eb8e6f\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295776 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f680f7d-76bb-484d-9158-906647eb8e6f-logs\") pod \"1f680f7d-76bb-484d-9158-906647eb8e6f\" (UID: \"1f680f7d-76bb-484d-9158-906647eb8e6f\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295910 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38f6e06-429d-45ad-b368-d1d6c639619b-etc-machine-id\") pod \"b38f6e06-429d-45ad-b368-d1d6c639619b\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " Jan 29 
16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295965 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data-custom\") pod \"b38f6e06-429d-45ad-b368-d1d6c639619b\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.295996 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76vm7\" (UniqueName: \"kubernetes.io/projected/b38f6e06-429d-45ad-b368-d1d6c639619b-kube-api-access-76vm7\") pod \"b38f6e06-429d-45ad-b368-d1d6c639619b\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.296052 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-combined-ca-bundle\") pod \"b38f6e06-429d-45ad-b368-d1d6c639619b\" (UID: \"b38f6e06-429d-45ad-b368-d1d6c639619b\") " Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.297488 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f680f7d-76bb-484d-9158-906647eb8e6f-logs" (OuterVolumeSpecName: "logs") pod "1f680f7d-76bb-484d-9158-906647eb8e6f" (UID: "1f680f7d-76bb-484d-9158-906647eb8e6f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.301131 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b38f6e06-429d-45ad-b368-d1d6c639619b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b38f6e06-429d-45ad-b368-d1d6c639619b" (UID: "b38f6e06-429d-45ad-b368-d1d6c639619b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.303182 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-scripts" (OuterVolumeSpecName: "scripts") pod "b38f6e06-429d-45ad-b368-d1d6c639619b" (UID: "b38f6e06-429d-45ad-b368-d1d6c639619b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.304179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f680f7d-76bb-484d-9158-906647eb8e6f" (UID: "1f680f7d-76bb-484d-9158-906647eb8e6f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.305192 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b38f6e06-429d-45ad-b368-d1d6c639619b" (UID: "b38f6e06-429d-45ad-b368-d1d6c639619b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.307091 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38f6e06-429d-45ad-b368-d1d6c639619b-kube-api-access-76vm7" (OuterVolumeSpecName: "kube-api-access-76vm7") pod "b38f6e06-429d-45ad-b368-d1d6c639619b" (UID: "b38f6e06-429d-45ad-b368-d1d6c639619b"). InnerVolumeSpecName "kube-api-access-76vm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.312351 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f680f7d-76bb-484d-9158-906647eb8e6f-kube-api-access-f8s7m" (OuterVolumeSpecName: "kube-api-access-f8s7m") pod "1f680f7d-76bb-484d-9158-906647eb8e6f" (UID: "1f680f7d-76bb-484d-9158-906647eb8e6f"). InnerVolumeSpecName "kube-api-access-f8s7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.333649 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f680f7d-76bb-484d-9158-906647eb8e6f" (UID: "1f680f7d-76bb-484d-9158-906647eb8e6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.397637 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b38f6e06-429d-45ad-b368-d1d6c639619b" (UID: "b38f6e06-429d-45ad-b368-d1d6c639619b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.403741 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data" (OuterVolumeSpecName: "config-data") pod "1f680f7d-76bb-484d-9158-906647eb8e6f" (UID: "1f680f7d-76bb-484d-9158-906647eb8e6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409046 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76vm7\" (UniqueName: \"kubernetes.io/projected/b38f6e06-429d-45ad-b368-d1d6c639619b-kube-api-access-76vm7\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409080 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409090 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409100 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409123 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409134 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f680f7d-76bb-484d-9158-906647eb8e6f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409142 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8s7m\" (UniqueName: \"kubernetes.io/projected/1f680f7d-76bb-484d-9158-906647eb8e6f-kube-api-access-f8s7m\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409150 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f680f7d-76bb-484d-9158-906647eb8e6f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409159 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b38f6e06-429d-45ad-b368-d1d6c639619b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.409167 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.441399 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data" (OuterVolumeSpecName: "config-data") pod "b38f6e06-429d-45ad-b368-d1d6c639619b" (UID: "b38f6e06-429d-45ad-b368-d1d6c639619b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.511455 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38f6e06-429d-45ad-b368-d1d6c639619b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.550848 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58968d868f-mvfqv"] Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.551333 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551355 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api" Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.551378 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dc22f3-d546-420e-96d0-103b9b9607d5" containerName="dnsmasq-dns" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551387 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dc22f3-d546-420e-96d0-103b9b9607d5" containerName="dnsmasq-dns" Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.551407 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerName="cinder-scheduler" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551415 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerName="cinder-scheduler" Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.551429 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dc22f3-d546-420e-96d0-103b9b9607d5" containerName="init" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551437 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dc22f3-d546-420e-96d0-103b9b9607d5" containerName="init" Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.551452 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api-log" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551459 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api-log" Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.551476 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerName="probe" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551483 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerName="probe" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551693 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dc22f3-d546-420e-96d0-103b9b9607d5" containerName="dnsmasq-dns" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551720 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551741 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerName="probe" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551749 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api-log" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.551762 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" containerName="cinder-scheduler" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.552872 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58968d868f-mvfqv" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.555788 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.556104 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.556404 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.567398 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58968d868f-mvfqv"] Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.592553 4813 generic.go:334] "Generic (PLEG): container finished" podID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerID="49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a" exitCode=0 Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.592860 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" event={"ID":"1f680f7d-76bb-484d-9158-906647eb8e6f","Type":"ContainerDied","Data":"49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a"} Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.592971 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" event={"ID":"1f680f7d-76bb-484d-9158-906647eb8e6f","Type":"ContainerDied","Data":"d3a9e8a466cb9f8bbb059dfd4158e0a8b3cc48b01d678e78b99a7d46e0a36718"} Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.593040 4813 scope.go:117] "RemoveContainer" containerID="49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.593221 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.597285 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b38f6e06-429d-45ad-b368-d1d6c639619b","Type":"ContainerDied","Data":"3b9527b0f7a2e154e0dd37d40ed4e87ef756bba5e490c08edf0395ac0066b704"} Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.597413 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.635439 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-84dbdc9b7b-ttpgj"] Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.644698 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-84dbdc9b7b-ttpgj"] Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.656130 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.657011 4813 scope.go:117] "RemoveContainer" containerID="ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.665446 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.682284 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.685599 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.685875 4813 scope.go:117] "RemoveContainer" containerID="49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a" Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.686319 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a\": container with ID starting with 49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a not found: ID does not exist" containerID="49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.686368 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a"} err="failed to get container status \"49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a\": rpc error: code = NotFound desc = could not find container \"49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a\": container with ID starting with 49cc50341b361036d789eb3232acc794619305828830f4a203c6bac7717c072a not found: ID does not exist" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.686400 4813 scope.go:117] "RemoveContainer" containerID="ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d" Jan 29 16:56:36 crc kubenswrapper[4813]: E0129 16:56:36.686974 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d\": container with ID starting with ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d not found: ID does not exist" containerID="ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.687005 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d"} err="failed to get container status \"ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d\": rpc error: code = NotFound desc = could not find container \"ca48dc20615fe4c31737fddbb81a0a21d238b7f8aed09512fa10de5ba51e551d\": 
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.687029 4813 scope.go:117] "RemoveContainer" containerID="597e6f46380ef38ac2bcadbe64956e102b8328ca5be05feb12c2604ede18b713"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.687494 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.698264 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.725004 4813 scope.go:117] "RemoveContainer" containerID="097373cdf5d1b91fd692661a385eac765660eaac7045388ac48a76091664fe60"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.726077 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-public-tls-certs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.726197 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-etc-swift\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.726220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-run-httpd\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.727682 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-internal-tls-certs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.728154 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g2cs\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-kube-api-access-6g2cs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.728338 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-combined-ca-bundle\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.728454 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-config-data\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.728479 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-log-httpd\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830570 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830646 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-combined-ca-bundle\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830700 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-config-data\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830777 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-log-httpd\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830816 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-public-tls-certs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830836 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-etc-swift\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830853 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-run-httpd\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830955 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-internal-tls-certs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.830988 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x69c\" (UniqueName: \"kubernetes.io/projected/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-kube-api-access-6x69c\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.831012 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g2cs\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-kube-api-access-6g2cs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.831032 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-scripts\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.831059 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.831089 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.831948 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-log-httpd\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.832221 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-run-httpd\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.836011 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-public-tls-certs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.837415 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-etc-swift\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.838033 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-internal-tls-certs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.839245 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-combined-ca-bundle\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.853628 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-config-data\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.868258 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g2cs\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-kube-api-access-6g2cs\") pod \"swift-proxy-58968d868f-mvfqv\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " pod="openstack/swift-proxy-58968d868f-mvfqv"
Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.880756 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58968d868f-mvfqv"
Need to start a new one" pod="openstack/swift-proxy-58968d868f-mvfqv" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.933009 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x69c\" (UniqueName: \"kubernetes.io/projected/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-kube-api-access-6x69c\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.933672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-scripts\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.934386 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.934505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.934789 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.934950 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.938800 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.938885 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.939336 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-scripts\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.940545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.944366 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:36 crc kubenswrapper[4813]: I0129 16:56:36.960275 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x69c\" (UniqueName: \"kubernetes.io/projected/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-kube-api-access-6x69c\") pod \"cinder-scheduler-0\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " pod="openstack/cinder-scheduler-0" Jan 29 16:56:37 crc kubenswrapper[4813]: I0129 16:56:37.005997 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:56:37 crc kubenswrapper[4813]: W0129 16:56:37.491247 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620cdd0f_d89f_4197_90e1_d1f17f4fd7f7.slice/crio-4cf4ffa551658ab6516aaa2d11b20ce75a22e2178d569e3b52c85c53f655d16b WatchSource:0}: Error finding container 4cf4ffa551658ab6516aaa2d11b20ce75a22e2178d569e3b52c85c53f655d16b: Status 404 returned error can't find the container with id 4cf4ffa551658ab6516aaa2d11b20ce75a22e2178d569e3b52c85c53f655d16b Jan 29 16:56:37 crc kubenswrapper[4813]: I0129 16:56:37.491648 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:56:37 crc kubenswrapper[4813]: I0129 16:56:37.611276 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7","Type":"ContainerStarted","Data":"4cf4ffa551658ab6516aaa2d11b20ce75a22e2178d569e3b52c85c53f655d16b"} Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.016537 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.017045 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="sg-core" containerID="cri-o://2f73ad869d0472c0958e52b7dac50b0b50e4069aa336a23f9f1d5fd38dc9a352" gracePeriod=30 Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.017057 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="ceilometer-notification-agent" containerID="cri-o://d737f93631b272e891889e0e44614ddd00cd536a026287162eeb195058391768" gracePeriod=30 Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.016898 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="ceilometer-central-agent" containerID="cri-o://0d8366af612144e2a5062885647307bf2802910e6a7a44b00c4f431b834bfdec" gracePeriod=30 Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.260010 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" 
path="/var/lib/kubelet/pods/1f680f7d-76bb-484d-9158-906647eb8e6f/volumes" Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.261549 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b38f6e06-429d-45ad-b368-d1d6c639619b" path="/var/lib/kubelet/pods/b38f6e06-429d-45ad-b368-d1d6c639619b/volumes" Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.625931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7","Type":"ContainerStarted","Data":"96ce0e98214ecc3dca853a25bbf06658a39336d51fa186c97aff7d03e5d42077"} Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.628820 4813 generic.go:334] "Generic (PLEG): container finished" podID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerID="2f73ad869d0472c0958e52b7dac50b0b50e4069aa336a23f9f1d5fd38dc9a352" exitCode=2 Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.628857 4813 generic.go:334] "Generic (PLEG): container finished" podID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerID="0d8366af612144e2a5062885647307bf2802910e6a7a44b00c4f431b834bfdec" exitCode=0 Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.628875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e8842a-9f0c-493c-a37c-581d1626a1ec","Type":"ContainerDied","Data":"2f73ad869d0472c0958e52b7dac50b0b50e4069aa336a23f9f1d5fd38dc9a352"} Jan 29 16:56:38 crc kubenswrapper[4813]: I0129 16:56:38.628893 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e8842a-9f0c-493c-a37c-581d1626a1ec","Type":"ContainerDied","Data":"0d8366af612144e2a5062885647307bf2802910e6a7a44b00c4f431b834bfdec"} Jan 29 16:56:39 crc kubenswrapper[4813]: I0129 16:56:39.107869 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58968d868f-mvfqv"] Jan 29 16:56:41 crc kubenswrapper[4813]: I0129 16:56:41.153006 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:56:41 crc kubenswrapper[4813]: I0129 16:56:41.153103 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84dbdc9b7b-ttpgj" podUID="1f680f7d-76bb-484d-9158-906647eb8e6f" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:56:42 crc kubenswrapper[4813]: I0129 16:56:42.258462 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 16:56:42 crc kubenswrapper[4813]: I0129 16:56:42.675582 4813 generic.go:334] "Generic (PLEG): container finished" podID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerID="d737f93631b272e891889e0e44614ddd00cd536a026287162eeb195058391768" exitCode=0 Jan 29 16:56:42 crc kubenswrapper[4813]: I0129 16:56:42.675714 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e8842a-9f0c-493c-a37c-581d1626a1ec","Type":"ContainerDied","Data":"d737f93631b272e891889e0e44614ddd00cd536a026287162eeb195058391768"} Jan 29 16:56:44 crc kubenswrapper[4813]: W0129 16:56:44.491549 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f2ea2fe_60d7_41f2_bc76_9c58e892bc59.slice/crio-9b61897c69434cc93df241e9a72dc9a553c59738c74c8b9c21283221e1f8a8d0 WatchSource:0}: Error finding container 9b61897c69434cc93df241e9a72dc9a553c59738c74c8b9c21283221e1f8a8d0: Status 404 returned error can't find the container with id 9b61897c69434cc93df241e9a72dc9a553c59738c74c8b9c21283221e1f8a8d0 Jan 29 16:56:44 crc kubenswrapper[4813]: I0129 16:56:44.718956 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58968d868f-mvfqv" event={"ID":"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59","Type":"ContainerStarted","Data":"9b61897c69434cc93df241e9a72dc9a553c59738c74c8b9c21283221e1f8a8d0"} Jan 29 16:56:44 crc kubenswrapper[4813]: I0129 16:56:44.914754 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.026223 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvtpr\" (UniqueName: \"kubernetes.io/projected/e6e8842a-9f0c-493c-a37c-581d1626a1ec-kube-api-access-qvtpr\") pod \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.026523 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-sg-core-conf-yaml\") pod \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.026582 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-config-data\") pod \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.026668 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-scripts\") pod \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.026687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-log-httpd\") pod \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.026752 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-run-httpd\") pod \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.026789 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-combined-ca-bundle\") pod \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\" (UID: \"e6e8842a-9f0c-493c-a37c-581d1626a1ec\") " Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.027275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e6e8842a-9f0c-493c-a37c-581d1626a1ec" (UID: "e6e8842a-9f0c-493c-a37c-581d1626a1ec"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.027561 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.032004 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e8842a-9f0c-493c-a37c-581d1626a1ec-kube-api-access-qvtpr" (OuterVolumeSpecName: "kube-api-access-qvtpr") pod "e6e8842a-9f0c-493c-a37c-581d1626a1ec" (UID: "e6e8842a-9f0c-493c-a37c-581d1626a1ec"). InnerVolumeSpecName "kube-api-access-qvtpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.032897 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e6e8842a-9f0c-493c-a37c-581d1626a1ec" (UID: "e6e8842a-9f0c-493c-a37c-581d1626a1ec"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.037320 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-scripts" (OuterVolumeSpecName: "scripts") pod "e6e8842a-9f0c-493c-a37c-581d1626a1ec" (UID: "e6e8842a-9f0c-493c-a37c-581d1626a1ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.052661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e6e8842a-9f0c-493c-a37c-581d1626a1ec" (UID: "e6e8842a-9f0c-493c-a37c-581d1626a1ec"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.086743 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6e8842a-9f0c-493c-a37c-581d1626a1ec" (UID: "e6e8842a-9f0c-493c-a37c-581d1626a1ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.089943 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-config-data" (OuterVolumeSpecName: "config-data") pod "e6e8842a-9f0c-493c-a37c-581d1626a1ec" (UID: "e6e8842a-9f0c-493c-a37c-581d1626a1ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.128706 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.128892 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6e8842a-9f0c-493c-a37c-581d1626a1ec-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.128967 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.129040 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvtpr\" (UniqueName: \"kubernetes.io/projected/e6e8842a-9f0c-493c-a37c-581d1626a1ec-kube-api-access-qvtpr\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.129104 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.129176 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e8842a-9f0c-493c-a37c-581d1626a1ec-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.740453 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"124e029a-6d27-4b49-830c-4be46fc186cc","Type":"ContainerStarted","Data":"6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0"} Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.748248 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6e8842a-9f0c-493c-a37c-581d1626a1ec","Type":"ContainerDied","Data":"7f0e072d7ee4d6d829eb93e9cbadb1b820fc6eb23d654275ae0c691d8f936741"} Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.748287 4813 scope.go:117] "RemoveContainer" containerID="2f73ad869d0472c0958e52b7dac50b0b50e4069aa336a23f9f1d5fd38dc9a352" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.748318 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.755618 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58968d868f-mvfqv" event={"ID":"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59","Type":"ContainerStarted","Data":"2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb"} Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.775424 4813 scope.go:117] "RemoveContainer" containerID="d737f93631b272e891889e0e44614ddd00cd536a026287162eeb195058391768" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.817175 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.826638 4813 scope.go:117] "RemoveContainer" containerID="0d8366af612144e2a5062885647307bf2802910e6a7a44b00c4f431b834bfdec" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.870335 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.878402 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:45 crc kubenswrapper[4813]: E0129 16:56:45.878775 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="ceilometer-notification-agent" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.878793 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="ceilometer-notification-agent" Jan 29 16:56:45 crc kubenswrapper[4813]: E0129 16:56:45.878813 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="sg-core" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.878819 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="sg-core" Jan 29 16:56:45 crc kubenswrapper[4813]: E0129 16:56:45.878837 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="ceilometer-central-agent" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.878843 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="ceilometer-central-agent" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.879001 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="ceilometer-central-agent" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.879013 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="sg-core" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.879027 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" containerName="ceilometer-notification-agent" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.880512 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.883363 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.884035 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:56:45 crc kubenswrapper[4813]: I0129 16:56:45.886574 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.058960 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-log-httpd\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.059017 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.059179 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-scripts\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.059238 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-config-data\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.059268 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-run-httpd\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.059386 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7v2\" (UniqueName: \"kubernetes.io/projected/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-kube-api-access-2t7v2\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.059426 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.160866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-config-data\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.160915 
4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-run-httpd\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.160991 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7v2\" (UniqueName: \"kubernetes.io/projected/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-kube-api-access-2t7v2\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.161025 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.161102 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-log-httpd\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.161151 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.161216 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-scripts\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.161587 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-run-httpd\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.161701 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-log-httpd\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.167541 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-scripts\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.168672 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.174365 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.178438 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-config-data\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.182919 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7v2\" (UniqueName: \"kubernetes.io/projected/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-kube-api-access-2t7v2\") pod \"ceilometer-0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.205589 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.253272 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e8842a-9f0c-493c-a37c-581d1626a1ec" path="/var/lib/kubelet/pods/e6e8842a-9f0c-493c-a37c-581d1626a1ec/volumes" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.739595 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:56:46 crc kubenswrapper[4813]: W0129 16:56:46.742495 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8180da1_49c9_4fe1_9ac5_cdcec262a8d0.slice/crio-a1cb56da282b883ff011248fe9e95b18c4f85d9a54f0cc31a5831876b8fb8a76 WatchSource:0}: Error finding container a1cb56da282b883ff011248fe9e95b18c4f85d9a54f0cc31a5831876b8fb8a76: Status 404 returned error can't find the container with id a1cb56da282b883ff011248fe9e95b18c4f85d9a54f0cc31a5831876b8fb8a76 Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.767837 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0","Type":"ContainerStarted","Data":"a1cb56da282b883ff011248fe9e95b18c4f85d9a54f0cc31a5831876b8fb8a76"} Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.771219 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.771344 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.775865 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58968d868f-mvfqv" event={"ID":"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59","Type":"ContainerStarted","Data":"f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a"} Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.776051 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58968d868f-mvfqv" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.776093 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58968d868f-mvfqv" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.779975 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7","Type":"ContainerStarted","Data":"b2561ed4f6f62f8ecb439c1fcdc29a4178bfc26d6202bdfabc617cd3c474d99c"} Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.893308 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58968d868f-mvfqv" podStartSLOduration=10.893288566 podStartE2EDuration="10.893288566s" podCreationTimestamp="2026-01-29 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:46.8654742 +0000 UTC m=+1659.352677416" watchObservedRunningTime="2026-01-29 16:56:46.893288566 +0000 UTC m=+1659.380491782" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.893697 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=10.893693405 podStartE2EDuration="10.893693405s" podCreationTimestamp="2026-01-29 16:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:56:46.843322811 +0000 UTC m=+1659.330526027" watchObservedRunningTime="2026-01-29 16:56:46.893693405 +0000 UTC m=+1659.380896621" Jan 29 16:56:46 crc kubenswrapper[4813]: I0129 16:56:46.906481 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.256439405 podStartE2EDuration="15.906457342s" podCreationTimestamp="2026-01-29 16:56:31 +0000 UTC" firstStartedPulling="2026-01-29 16:56:32.393529022 +0000 UTC m=+1644.880732238" lastFinishedPulling="2026-01-29 16:56:45.043546959 +0000 UTC m=+1657.530750175" observedRunningTime="2026-01-29 16:56:46.88413627 +0000 UTC m=+1659.371339486" watchObservedRunningTime="2026-01-29 16:56:46.906457342 +0000 UTC m=+1659.393660558" Jan 29 16:56:47 crc kubenswrapper[4813]: I0129 16:56:47.007552 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 16:56:47 crc kubenswrapper[4813]: I0129 16:56:47.008783 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.171:8080/\": dial tcp 10.217.0.171:8080: connect: connection refused" Jan 29 16:56:48 crc kubenswrapper[4813]: I0129 16:56:48.825469 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0","Type":"ContainerStarted","Data":"71a3d3410b9d283f714cb42ce5149b7699c33fa384e7e979960158a6af65405e"} Jan 29 16:56:49 crc kubenswrapper[4813]: I0129 16:56:49.240502 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:56:49 crc kubenswrapper[4813]: E0129 16:56:49.240820 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:56:51 crc kubenswrapper[4813]: I0129 16:56:51.889904 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-58968d868f-mvfqv" Jan 29 16:56:51 crc kubenswrapper[4813]: I0129 16:56:51.891705 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58968d868f-mvfqv" Jan 29 16:56:52 crc kubenswrapper[4813]: I0129 16:56:52.859831 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0","Type":"ContainerStarted","Data":"735355b6600c4f7c0b1b30c66795884cf754d37ea96a9723bad701667dcb8bf0"} Jan 29 16:56:53 crc kubenswrapper[4813]: I0129 16:56:53.532663 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 16:56:58 crc kubenswrapper[4813]: E0129 16:56:58.733360 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:56:58 crc kubenswrapper[4813]: E0129 16:56:58.734207 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t7v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a8180da1-49c9-4fe1-9ac5-cdcec262a8d0): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:56:58 crc kubenswrapper[4813]: E0129 16:56:58.735352 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" Jan 29 16:56:58 crc kubenswrapper[4813]: I0129 16:56:58.924590 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0","Type":"ContainerStarted","Data":"419addf8170ea70f96edf5cd043ba5f5fe27b8a54918d27ff07058b47ac2bad4"} Jan 29 16:56:58 crc kubenswrapper[4813]: E0129 16:56:58.926489 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" Jan 29 16:56:59 crc kubenswrapper[4813]: E0129 16:56:59.937231 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" Jan 29 16:57:02 crc kubenswrapper[4813]: E0129 16:57:02.292575 4813 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bf146e2_d6a8_4c27_a3f5_ec80641f6017.slice/crio-conmon-1d6227c2caf355a9f0fbf758a68ba38abb2b953c39c33eb99281e58b465e18ff.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:57:02 crc kubenswrapper[4813]: I0129 16:57:02.963523 4813 generic.go:334] "Generic (PLEG): container finished" podID="1bf146e2-d6a8-4c27-a3f5-ec80641f6017" containerID="1d6227c2caf355a9f0fbf758a68ba38abb2b953c39c33eb99281e58b465e18ff" exitCode=0 Jan 29 16:57:02 crc kubenswrapper[4813]: I0129 16:57:02.963588 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4tc6w" event={"ID":"1bf146e2-d6a8-4c27-a3f5-ec80641f6017","Type":"ContainerDied","Data":"1d6227c2caf355a9f0fbf758a68ba38abb2b953c39c33eb99281e58b465e18ff"} Jan 29 16:57:03 crc kubenswrapper[4813]: I0129 16:57:03.239493 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:57:03 crc 
kubenswrapper[4813]: E0129 16:57:03.239969 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.350431 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.412987 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-combined-ca-bundle\") pod \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.413688 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvd4r\" (UniqueName: \"kubernetes.io/projected/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-kube-api-access-nvd4r\") pod \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.413854 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-config\") pod \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\" (UID: \"1bf146e2-d6a8-4c27-a3f5-ec80641f6017\") " Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.420528 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-kube-api-access-nvd4r" (OuterVolumeSpecName: "kube-api-access-nvd4r") pod "1bf146e2-d6a8-4c27-a3f5-ec80641f6017" (UID: "1bf146e2-d6a8-4c27-a3f5-ec80641f6017"). InnerVolumeSpecName "kube-api-access-nvd4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.442833 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bf146e2-d6a8-4c27-a3f5-ec80641f6017" (UID: "1bf146e2-d6a8-4c27-a3f5-ec80641f6017"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.445412 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-config" (OuterVolumeSpecName: "config") pod "1bf146e2-d6a8-4c27-a3f5-ec80641f6017" (UID: "1bf146e2-d6a8-4c27-a3f5-ec80641f6017"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.516437 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.516468 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvd4r\" (UniqueName: \"kubernetes.io/projected/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-kube-api-access-nvd4r\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.516479 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1bf146e2-d6a8-4c27-a3f5-ec80641f6017-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.983430 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4tc6w" event={"ID":"1bf146e2-d6a8-4c27-a3f5-ec80641f6017","Type":"ContainerDied","Data":"1f908de25adc5699d3348704d9efcf231680e54f6c2380a10b1a07b221716859"} Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.983469 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f908de25adc5699d3348704d9efcf231680e54f6c2380a10b1a07b221716859" Jan 29 16:57:04 crc kubenswrapper[4813]: I0129 16:57:04.983518 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4tc6w" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.237553 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-dbbcf"] Jan 29 16:57:05 crc kubenswrapper[4813]: E0129 16:57:05.238050 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf146e2-d6a8-4c27-a3f5-ec80641f6017" containerName="neutron-db-sync" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.238075 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf146e2-d6a8-4c27-a3f5-ec80641f6017" containerName="neutron-db-sync" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.238303 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf146e2-d6a8-4c27-a3f5-ec80641f6017" containerName="neutron-db-sync" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.239682 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.281679 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-dbbcf"] Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.330535 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9sqn\" (UniqueName: \"kubernetes.io/projected/5eca4c37-f9b2-4941-95da-46b351fb6616-kube-api-access-z9sqn\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.330690 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.330783 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.330887 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.330985 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-config\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.331032 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.357863 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-798d457854-wxskz"] Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.361356 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.368678 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.371608 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.371890 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.374431 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9mxwt" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.389222 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-798d457854-wxskz"] Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.436840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9sqn\" (UniqueName: \"kubernetes.io/projected/5eca4c37-f9b2-4941-95da-46b351fb6616-kube-api-access-z9sqn\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.436934 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-combined-ca-bundle\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.436971 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-httpd-config\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.436999 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.437052 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.437095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlt92\" (UniqueName: \"kubernetes.io/projected/17e6b715-d8fb-496e-a457-0730e02646f3-kube-api-access-jlt92\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.438009 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-swift-storage-0\") pod 
\"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.438097 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-ovndb-tls-certs\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.438135 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-config\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.438204 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.438801 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-config\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.440143 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.440914 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.440988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.441658 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-config\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.442482 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 
16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.465734 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9sqn\" (UniqueName: \"kubernetes.io/projected/5eca4c37-f9b2-4941-95da-46b351fb6616-kube-api-access-z9sqn\") pod \"dnsmasq-dns-75dbb546bf-dbbcf\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.541342 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlt92\" (UniqueName: \"kubernetes.io/projected/17e6b715-d8fb-496e-a457-0730e02646f3-kube-api-access-jlt92\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.541464 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-ovndb-tls-certs\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.541505 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-config\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.541551 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-combined-ca-bundle\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.541574 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-httpd-config\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.547652 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-ovndb-tls-certs\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.547745 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-combined-ca-bundle\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.548014 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-config\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.558729 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-httpd-config\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.560570 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlt92\" (UniqueName: \"kubernetes.io/projected/17e6b715-d8fb-496e-a457-0730e02646f3-kube-api-access-jlt92\") pod \"neutron-798d457854-wxskz\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.567096 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:57:05 crc kubenswrapper[4813]: I0129 16:57:05.710721 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.076930 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-dbbcf"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.210543 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wksqv"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.212226 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wksqv" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.229751 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wksqv"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.282272 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-operator-scripts\") pod \"nova-api-db-create-wksqv\" (UID: \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\") " pod="openstack/nova-api-db-create-wksqv" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.282348 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc4z8\" (UniqueName: \"kubernetes.io/projected/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-kube-api-access-pc4z8\") pod \"nova-api-db-create-wksqv\" (UID: \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\") " pod="openstack/nova-api-db-create-wksqv" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.308924 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4sxkl"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.310967 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4sxkl" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.322487 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4sxkl"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.385355 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmpkv\" (UniqueName: \"kubernetes.io/projected/5fdd976a-96f2-4c75-bab4-b557d5c6c025-kube-api-access-qmpkv\") pod \"nova-cell0-db-create-4sxkl\" (UID: \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\") " pod="openstack/nova-cell0-db-create-4sxkl" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.385445 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-operator-scripts\") pod \"nova-api-db-create-wksqv\" (UID: \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\") " pod="openstack/nova-api-db-create-wksqv" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.385513 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc4z8\" (UniqueName: \"kubernetes.io/projected/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-kube-api-access-pc4z8\") pod \"nova-api-db-create-wksqv\" (UID: \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\") " pod="openstack/nova-api-db-create-wksqv" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.385670 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fdd976a-96f2-4c75-bab4-b557d5c6c025-operator-scripts\") pod \"nova-cell0-db-create-4sxkl\" (UID: \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\") " pod="openstack/nova-cell0-db-create-4sxkl" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.386510 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-operator-scripts\") pod \"nova-api-db-create-wksqv\" (UID: \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\") " pod="openstack/nova-api-db-create-wksqv" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.421799 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc4z8\" (UniqueName: \"kubernetes.io/projected/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-kube-api-access-pc4z8\") pod \"nova-api-db-create-wksqv\" (UID: \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\") " pod="openstack/nova-api-db-create-wksqv" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.442191 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-f5qpr"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.454695 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f5qpr" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.468871 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-798d457854-wxskz"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.485452 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5b8c-account-create-update-wq4nd"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.487432 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5b8c-account-create-update-wq4nd" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.489985 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fdd976a-96f2-4c75-bab4-b557d5c6c025-operator-scripts\") pod \"nova-cell0-db-create-4sxkl\" (UID: \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\") " pod="openstack/nova-cell0-db-create-4sxkl" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.490056 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmpkv\" (UniqueName: \"kubernetes.io/projected/5fdd976a-96f2-4c75-bab4-b557d5c6c025-kube-api-access-qmpkv\") pod \"nova-cell0-db-create-4sxkl\" (UID: \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\") " pod="openstack/nova-cell0-db-create-4sxkl" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.490328 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.491318 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fdd976a-96f2-4c75-bab4-b557d5c6c025-operator-scripts\") pod \"nova-cell0-db-create-4sxkl\" (UID: \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\") " pod="openstack/nova-cell0-db-create-4sxkl" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.508218 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5b8c-account-create-update-wq4nd"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.522858 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f5qpr"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.531223 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmpkv\" (UniqueName: \"kubernetes.io/projected/5fdd976a-96f2-4c75-bab4-b557d5c6c025-kube-api-access-qmpkv\") pod \"nova-cell0-db-create-4sxkl\" (UID: \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\") " pod="openstack/nova-cell0-db-create-4sxkl" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.600379 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-operator-scripts\") pod \"nova-api-5b8c-account-create-update-wq4nd\" (UID: \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\") " pod="openstack/nova-api-5b8c-account-create-update-wq4nd" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.600466 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a433a1-cd64-432a-ac5a-1d8367a3a723-operator-scripts\") pod \"nova-cell1-db-create-f5qpr\" (UID: \"40a433a1-cd64-432a-ac5a-1d8367a3a723\") " pod="openstack/nova-cell1-db-create-f5qpr" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.600525 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk9sd\" (UniqueName: \"kubernetes.io/projected/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-kube-api-access-jk9sd\") pod \"nova-api-5b8c-account-create-update-wq4nd\" (UID: \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\") " pod="openstack/nova-api-5b8c-account-create-update-wq4nd" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.600639 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tcdr\" (UniqueName: \"kubernetes.io/projected/40a433a1-cd64-432a-ac5a-1d8367a3a723-kube-api-access-2tcdr\") pod \"nova-cell1-db-create-f5qpr\" (UID: \"40a433a1-cd64-432a-ac5a-1d8367a3a723\") " pod="openstack/nova-cell1-db-create-f5qpr" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.613090 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wksqv" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.637712 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6699-account-create-update-gsbdz"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.641741 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4sxkl" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.645809 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6699-account-create-update-gsbdz" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.648784 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.669227 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6699-account-create-update-gsbdz"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.705419 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a433a1-cd64-432a-ac5a-1d8367a3a723-operator-scripts\") pod \"nova-cell1-db-create-f5qpr\" (UID: \"40a433a1-cd64-432a-ac5a-1d8367a3a723\") " pod="openstack/nova-cell1-db-create-f5qpr" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.707839 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk9sd\" (UniqueName: \"kubernetes.io/projected/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-kube-api-access-jk9sd\") pod \"nova-api-5b8c-account-create-update-wq4nd\" (UID: \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\") " pod="openstack/nova-api-5b8c-account-create-update-wq4nd" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.709175 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tcdr\" (UniqueName: \"kubernetes.io/projected/40a433a1-cd64-432a-ac5a-1d8367a3a723-kube-api-access-2tcdr\") pod \"nova-cell1-db-create-f5qpr\" (UID: \"40a433a1-cd64-432a-ac5a-1d8367a3a723\") " pod="openstack/nova-cell1-db-create-f5qpr" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.709269 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-operator-scripts\") pod \"nova-api-5b8c-account-create-update-wq4nd\" (UID: \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\") " pod="openstack/nova-api-5b8c-account-create-update-wq4nd" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.710300 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-operator-scripts\") pod \"nova-api-5b8c-account-create-update-wq4nd\" (UID: \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\") " pod="openstack/nova-api-5b8c-account-create-update-wq4nd" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.717755 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a433a1-cd64-432a-ac5a-1d8367a3a723-operator-scripts\") pod \"nova-cell1-db-create-f5qpr\" (UID: \"40a433a1-cd64-432a-ac5a-1d8367a3a723\") " pod="openstack/nova-cell1-db-create-f5qpr" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.744438 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tcdr\" (UniqueName: \"kubernetes.io/projected/40a433a1-cd64-432a-ac5a-1d8367a3a723-kube-api-access-2tcdr\") pod \"nova-cell1-db-create-f5qpr\" (UID: \"40a433a1-cd64-432a-ac5a-1d8367a3a723\") " pod="openstack/nova-cell1-db-create-f5qpr" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.745983 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk9sd\" (UniqueName: \"kubernetes.io/projected/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-kube-api-access-jk9sd\") pod \"nova-api-5b8c-account-create-update-wq4nd\" (UID: \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\") " pod="openstack/nova-api-5b8c-account-create-update-wq4nd" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.793261 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f5qpr" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.811523 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6gr\" (UniqueName: \"kubernetes.io/projected/d484b71c-e076-43d8-ac63-afe47f877f98-kube-api-access-6x6gr\") pod \"nova-cell0-6699-account-create-update-gsbdz\" (UID: \"d484b71c-e076-43d8-ac63-afe47f877f98\") " pod="openstack/nova-cell0-6699-account-create-update-gsbdz" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.811794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d484b71c-e076-43d8-ac63-afe47f877f98-operator-scripts\") pod \"nova-cell0-6699-account-create-update-gsbdz\" (UID: \"d484b71c-e076-43d8-ac63-afe47f877f98\") " pod="openstack/nova-cell0-6699-account-create-update-gsbdz" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.822406 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5b8c-account-create-update-wq4nd" Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.822791 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-043c-account-create-update-m4hzv"] Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.824201 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.827479 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.846702 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-043c-account-create-update-m4hzv"]
Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.913893 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d484b71c-e076-43d8-ac63-afe47f877f98-operator-scripts\") pod \"nova-cell0-6699-account-create-update-gsbdz\" (UID: \"d484b71c-e076-43d8-ac63-afe47f877f98\") " pod="openstack/nova-cell0-6699-account-create-update-gsbdz"
Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.914050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x6gr\" (UniqueName: \"kubernetes.io/projected/d484b71c-e076-43d8-ac63-afe47f877f98-kube-api-access-6x6gr\") pod \"nova-cell0-6699-account-create-update-gsbdz\" (UID: \"d484b71c-e076-43d8-ac63-afe47f877f98\") " pod="openstack/nova-cell0-6699-account-create-update-gsbdz"
Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.914222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef5112b-ed61-4433-9339-a6bd16e11462-operator-scripts\") pod \"nova-cell1-043c-account-create-update-m4hzv\" (UID: \"1ef5112b-ed61-4433-9339-a6bd16e11462\") " pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.914595 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sf6t\" (UniqueName: \"kubernetes.io/projected/1ef5112b-ed61-4433-9339-a6bd16e11462-kube-api-access-5sf6t\") pod \"nova-cell1-043c-account-create-update-m4hzv\" (UID: \"1ef5112b-ed61-4433-9339-a6bd16e11462\") " pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.915184 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d484b71c-e076-43d8-ac63-afe47f877f98-operator-scripts\") pod \"nova-cell0-6699-account-create-update-gsbdz\" (UID: \"d484b71c-e076-43d8-ac63-afe47f877f98\") " pod="openstack/nova-cell0-6699-account-create-update-gsbdz"
Jan 29 16:57:06 crc kubenswrapper[4813]: I0129 16:57:06.942407 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x6gr\" (UniqueName: \"kubernetes.io/projected/d484b71c-e076-43d8-ac63-afe47f877f98-kube-api-access-6x6gr\") pod \"nova-cell0-6699-account-create-update-gsbdz\" (UID: \"d484b71c-e076-43d8-ac63-afe47f877f98\") " pod="openstack/nova-cell0-6699-account-create-update-gsbdz"
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.017717 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef5112b-ed61-4433-9339-a6bd16e11462-operator-scripts\") pod \"nova-cell1-043c-account-create-update-m4hzv\" (UID: \"1ef5112b-ed61-4433-9339-a6bd16e11462\") " pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.017802 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sf6t\" (UniqueName: \"kubernetes.io/projected/1ef5112b-ed61-4433-9339-a6bd16e11462-kube-api-access-5sf6t\") pod \"nova-cell1-043c-account-create-update-m4hzv\" (UID: \"1ef5112b-ed61-4433-9339-a6bd16e11462\") " pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.023760 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef5112b-ed61-4433-9339-a6bd16e11462-operator-scripts\") pod \"nova-cell1-043c-account-create-update-m4hzv\" (UID: \"1ef5112b-ed61-4433-9339-a6bd16e11462\") " pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.027803 4813 generic.go:334] "Generic (PLEG): container finished" podID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerID="8bcb5096e1c226b23a5e530360f64d61c566d065e4957348e0f63fe2d9ec9afb" exitCode=0
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.027889 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" event={"ID":"5eca4c37-f9b2-4941-95da-46b351fb6616","Type":"ContainerDied","Data":"8bcb5096e1c226b23a5e530360f64d61c566d065e4957348e0f63fe2d9ec9afb"}
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.027920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" event={"ID":"5eca4c37-f9b2-4941-95da-46b351fb6616","Type":"ContainerStarted","Data":"df3cad68c6016399543c3eb072369d1de26adcdf58927b051c7f6f9ed443a35d"}
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.030732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d457854-wxskz" event={"ID":"17e6b715-d8fb-496e-a457-0730e02646f3","Type":"ContainerStarted","Data":"1e5b623683f6cb7ba9c5493b668c94cddb77f58344b1409daac03e8d2ad6df61"}
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.030776 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d457854-wxskz" event={"ID":"17e6b715-d8fb-496e-a457-0730e02646f3","Type":"ContainerStarted","Data":"d4443c88983298bdede1db56fc1ad3cb7d06028cf347c58fca9754986e35af23"}
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.050781 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sf6t\" (UniqueName: \"kubernetes.io/projected/1ef5112b-ed61-4433-9339-a6bd16e11462-kube-api-access-5sf6t\") pod \"nova-cell1-043c-account-create-update-m4hzv\" (UID: \"1ef5112b-ed61-4433-9339-a6bd16e11462\") " pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.138339 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6699-account-create-update-gsbdz"
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.149597 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:07 crc kubenswrapper[4813]: W0129 16:57:07.309137 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb3dcd69_3478_4b64_86b4_9d5b22b803c8.slice/crio-511882f6cce129f5dcc0f5d45b6b1ce77297cbab132e236fe455def688c39a0f WatchSource:0}: Error finding container 511882f6cce129f5dcc0f5d45b6b1ce77297cbab132e236fe455def688c39a0f: Status 404 returned error can't find the container with id 511882f6cce129f5dcc0f5d45b6b1ce77297cbab132e236fe455def688c39a0f
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.330595 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wksqv"]
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.460214 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4sxkl"]
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.548484 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5b8c-account-create-update-wq4nd"]
Jan 29 16:57:07 crc kubenswrapper[4813]: W0129 16:57:07.565357 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ee298b_a2bb_4f72_874c_6a0a27a56d9a.slice/crio-51e8c7e491a6b7cb07c4bb19166c6d527f9a3be48f9fb32b76c0a169f3a1818f WatchSource:0}: Error finding container 51e8c7e491a6b7cb07c4bb19166c6d527f9a3be48f9fb32b76c0a169f3a1818f: Status 404 returned error can't find the container with id 51e8c7e491a6b7cb07c4bb19166c6d527f9a3be48f9fb32b76c0a169f3a1818f
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.568191 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f5qpr"]
Jan 29 16:57:07 crc kubenswrapper[4813]: W0129 16:57:07.590278 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40a433a1_cd64_432a_ac5a_1d8367a3a723.slice/crio-ac2dbebb6b305cf24165507165b5736898aa93b158b15fe2d103435e485078dc WatchSource:0}: Error finding container ac2dbebb6b305cf24165507165b5736898aa93b158b15fe2d103435e485078dc: Status 404 returned error can't find the container with id ac2dbebb6b305cf24165507165b5736898aa93b158b15fe2d103435e485078dc
Jan 29 16:57:07 crc kubenswrapper[4813]: W0129 16:57:07.855743 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ef5112b_ed61_4433_9339_a6bd16e11462.slice/crio-bf328cd2f8deebf5ef9038f2a33a3d1e8a38b625f338dac039f9a406c5f5c948 WatchSource:0}: Error finding container bf328cd2f8deebf5ef9038f2a33a3d1e8a38b625f338dac039f9a406c5f5c948: Status 404 returned error can't find the container with id bf328cd2f8deebf5ef9038f2a33a3d1e8a38b625f338dac039f9a406c5f5c948
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.856938 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-043c-account-create-update-m4hzv"]
Jan 29 16:57:07 crc kubenswrapper[4813]: I0129 16:57:07.875046 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6699-account-create-update-gsbdz"]
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.043311 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4sxkl" event={"ID":"5fdd976a-96f2-4c75-bab4-b557d5c6c025","Type":"ContainerStarted","Data":"ae72b67c59fdc47344232a2d2ed641205d0d85d9ca603f13a401808d975fecc5"}
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.045806 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" event={"ID":"5eca4c37-f9b2-4941-95da-46b351fb6616","Type":"ContainerStarted","Data":"102440e0d37d695a951d8334372eb2aacd50d381aa2bdfa36a20872e2dd460df"}
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.047125 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf"
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.051710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-043c-account-create-update-m4hzv" event={"ID":"1ef5112b-ed61-4433-9339-a6bd16e11462","Type":"ContainerStarted","Data":"bf328cd2f8deebf5ef9038f2a33a3d1e8a38b625f338dac039f9a406c5f5c948"}
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.057375 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d457854-wxskz" event={"ID":"17e6b715-d8fb-496e-a457-0730e02646f3","Type":"ContainerStarted","Data":"0e267593dfe5d86281e238fd74b2f732badf00023a2da0789f38bd552593e52d"}
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.058170 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-798d457854-wxskz"
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.061595 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wksqv" event={"ID":"cb3dcd69-3478-4b64-86b4-9d5b22b803c8","Type":"ContainerStarted","Data":"511882f6cce129f5dcc0f5d45b6b1ce77297cbab132e236fe455def688c39a0f"}
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.065130 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6699-account-create-update-gsbdz" event={"ID":"d484b71c-e076-43d8-ac63-afe47f877f98","Type":"ContainerStarted","Data":"9978237520f0a7014607c2c1f7d5fb263f82130acf734a166a3ae6c653b2307f"}
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.069293 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5b8c-account-create-update-wq4nd" event={"ID":"81ee298b-a2bb-4f72-874c-6a0a27a56d9a","Type":"ContainerStarted","Data":"51e8c7e491a6b7cb07c4bb19166c6d527f9a3be48f9fb32b76c0a169f3a1818f"}
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.071420 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" podStartSLOduration=3.07140179 podStartE2EDuration="3.07140179s" podCreationTimestamp="2026-01-29 16:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:08.065675701 +0000 UTC m=+1680.552878917" watchObservedRunningTime="2026-01-29 16:57:08.07140179 +0000 UTC m=+1680.558605006"
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.077411 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f5qpr" event={"ID":"40a433a1-cd64-432a-ac5a-1d8367a3a723","Type":"ContainerStarted","Data":"ac2dbebb6b305cf24165507165b5736898aa93b158b15fe2d103435e485078dc"}
Jan 29 16:57:08 crc kubenswrapper[4813]: I0129 16:57:08.095140 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-798d457854-wxskz" podStartSLOduration=3.095103943 podStartE2EDuration="3.095103943s" podCreationTimestamp="2026-01-29 16:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:08.086692654 +0000 UTC m=+1680.573895870" watchObservedRunningTime="2026-01-29 16:57:08.095103943 +0000 UTC m=+1680.582307159"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.094326 4813 generic.go:334] "Generic (PLEG): container finished" podID="d484b71c-e076-43d8-ac63-afe47f877f98" containerID="8b8b67b8a39e84e66dc8143533661febf68e6872996e2632319803eba49bed81" exitCode=0
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.094439 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6699-account-create-update-gsbdz" event={"ID":"d484b71c-e076-43d8-ac63-afe47f877f98","Type":"ContainerDied","Data":"8b8b67b8a39e84e66dc8143533661febf68e6872996e2632319803eba49bed81"}
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.103774 4813 generic.go:334] "Generic (PLEG): container finished" podID="81ee298b-a2bb-4f72-874c-6a0a27a56d9a" containerID="f6b4e2de6fc7c170ae3a44784a2550fe9843c7ca3c2cd3058e85cb5f72c2a9e2" exitCode=0
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.103921 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5b8c-account-create-update-wq4nd" event={"ID":"81ee298b-a2bb-4f72-874c-6a0a27a56d9a","Type":"ContainerDied","Data":"f6b4e2de6fc7c170ae3a44784a2550fe9843c7ca3c2cd3058e85cb5f72c2a9e2"}
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.111602 4813 generic.go:334] "Generic (PLEG): container finished" podID="40a433a1-cd64-432a-ac5a-1d8367a3a723" containerID="b67fdc4d54a98668ffd876dfecd0de8419e745636df96be1c039e05ff0e43319" exitCode=0
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.111758 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f5qpr" event={"ID":"40a433a1-cd64-432a-ac5a-1d8367a3a723","Type":"ContainerDied","Data":"b67fdc4d54a98668ffd876dfecd0de8419e745636df96be1c039e05ff0e43319"}
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.126353 4813 generic.go:334] "Generic (PLEG): container finished" podID="5fdd976a-96f2-4c75-bab4-b557d5c6c025" containerID="9180461d3ca1b31da8c2b5ffd77c0afcc94c78f3c213b2a804f76cf6152efa84" exitCode=0
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.126517 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4sxkl" event={"ID":"5fdd976a-96f2-4c75-bab4-b557d5c6c025","Type":"ContainerDied","Data":"9180461d3ca1b31da8c2b5ffd77c0afcc94c78f3c213b2a804f76cf6152efa84"}
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.131187 4813 generic.go:334] "Generic (PLEG): container finished" podID="1ef5112b-ed61-4433-9339-a6bd16e11462" containerID="d1421a6327e7e180010e12d8787ac24c190372a9071611b0e1315d40e3b4fb96" exitCode=0
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.131274 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-043c-account-create-update-m4hzv" event={"ID":"1ef5112b-ed61-4433-9339-a6bd16e11462","Type":"ContainerDied","Data":"d1421a6327e7e180010e12d8787ac24c190372a9071611b0e1315d40e3b4fb96"}
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.138066 4813 generic.go:334] "Generic (PLEG): container finished" podID="cb3dcd69-3478-4b64-86b4-9d5b22b803c8" containerID="57f41b96c4e3c37aa561663bfe83546f5132f6cf34321eb2021fd30cbc286d2c" exitCode=0
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.138448 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wksqv" event={"ID":"cb3dcd69-3478-4b64-86b4-9d5b22b803c8","Type":"ContainerDied","Data":"57f41b96c4e3c37aa561663bfe83546f5132f6cf34321eb2021fd30cbc286d2c"}
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.346518 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-886657b65-p552j"]
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.348504 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.351399 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.351728 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.384959 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-886657b65-p552j"]
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.407409 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-config\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.407492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fkzg\" (UniqueName: \"kubernetes.io/projected/8fa06e8d-be9e-4451-8387-d3ec49dd8306-kube-api-access-6fkzg\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.407601 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-internal-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.407632 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-ovndb-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.407692 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-httpd-config\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.407750 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-combined-ca-bundle\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.407784 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-public-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.509380 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-internal-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.509432 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-ovndb-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.509486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-httpd-config\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.509539 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-combined-ca-bundle\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.509565 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-public-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.509657 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-config\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.509694 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fkzg\" (UniqueName: \"kubernetes.io/projected/8fa06e8d-be9e-4451-8387-d3ec49dd8306-kube-api-access-6fkzg\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.515740 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-internal-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.515963 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-public-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.517844 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-ovndb-tls-certs\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.518697 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-config\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.519329 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-httpd-config\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.528076 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-combined-ca-bundle\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.535845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fkzg\" (UniqueName: \"kubernetes.io/projected/8fa06e8d-be9e-4451-8387-d3ec49dd8306-kube-api-access-6fkzg\") pod \"neutron-886657b65-p552j\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") " pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:09 crc kubenswrapper[4813]: I0129 16:57:09.670222 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.393436 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-886657b65-p552j"]
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.644655 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5b8c-account-create-update-wq4nd"
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.839995 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk9sd\" (UniqueName: \"kubernetes.io/projected/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-kube-api-access-jk9sd\") pod \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\" (UID: \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\") "
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.840073 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-operator-scripts\") pod \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\" (UID: \"81ee298b-a2bb-4f72-874c-6a0a27a56d9a\") "
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.840700 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81ee298b-a2bb-4f72-874c-6a0a27a56d9a" (UID: "81ee298b-a2bb-4f72-874c-6a0a27a56d9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.848590 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-kube-api-access-jk9sd" (OuterVolumeSpecName: "kube-api-access-jk9sd") pod "81ee298b-a2bb-4f72-874c-6a0a27a56d9a" (UID: "81ee298b-a2bb-4f72-874c-6a0a27a56d9a"). InnerVolumeSpecName "kube-api-access-jk9sd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.942755 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.942793 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk9sd\" (UniqueName: \"kubernetes.io/projected/81ee298b-a2bb-4f72-874c-6a0a27a56d9a-kube-api-access-jk9sd\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.983842 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f5qpr"
Jan 29 16:57:10 crc kubenswrapper[4813]: I0129 16:57:10.990480 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.016918 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6699-account-create-update-gsbdz"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.039875 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wksqv"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.051253 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4sxkl"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.145813 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc4z8\" (UniqueName: \"kubernetes.io/projected/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-kube-api-access-pc4z8\") pod \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\" (UID: \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.145917 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-operator-scripts\") pod \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\" (UID: \"cb3dcd69-3478-4b64-86b4-9d5b22b803c8\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.145951 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tcdr\" (UniqueName: \"kubernetes.io/projected/40a433a1-cd64-432a-ac5a-1d8367a3a723-kube-api-access-2tcdr\") pod \"40a433a1-cd64-432a-ac5a-1d8367a3a723\" (UID: \"40a433a1-cd64-432a-ac5a-1d8367a3a723\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.146160 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a433a1-cd64-432a-ac5a-1d8367a3a723-operator-scripts\") pod \"40a433a1-cd64-432a-ac5a-1d8367a3a723\" (UID: \"40a433a1-cd64-432a-ac5a-1d8367a3a723\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.146209 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d484b71c-e076-43d8-ac63-afe47f877f98-operator-scripts\") pod \"d484b71c-e076-43d8-ac63-afe47f877f98\" (UID: \"d484b71c-e076-43d8-ac63-afe47f877f98\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.146306 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x6gr\" (UniqueName: \"kubernetes.io/projected/d484b71c-e076-43d8-ac63-afe47f877f98-kube-api-access-6x6gr\") pod \"d484b71c-e076-43d8-ac63-afe47f877f98\" (UID: \"d484b71c-e076-43d8-ac63-afe47f877f98\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.146414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef5112b-ed61-4433-9339-a6bd16e11462-operator-scripts\") pod \"1ef5112b-ed61-4433-9339-a6bd16e11462\" (UID: \"1ef5112b-ed61-4433-9339-a6bd16e11462\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.146517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb3dcd69-3478-4b64-86b4-9d5b22b803c8" (UID: "cb3dcd69-3478-4b64-86b4-9d5b22b803c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.146633 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sf6t\" (UniqueName: \"kubernetes.io/projected/1ef5112b-ed61-4433-9339-a6bd16e11462-kube-api-access-5sf6t\") pod \"1ef5112b-ed61-4433-9339-a6bd16e11462\" (UID: \"1ef5112b-ed61-4433-9339-a6bd16e11462\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.146814 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a433a1-cd64-432a-ac5a-1d8367a3a723-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40a433a1-cd64-432a-ac5a-1d8367a3a723" (UID: "40a433a1-cd64-432a-ac5a-1d8367a3a723"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.148517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d484b71c-e076-43d8-ac63-afe47f877f98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d484b71c-e076-43d8-ac63-afe47f877f98" (UID: "d484b71c-e076-43d8-ac63-afe47f877f98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.148870 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.148898 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40a433a1-cd64-432a-ac5a-1d8367a3a723-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.148910 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d484b71c-e076-43d8-ac63-afe47f877f98-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.149326 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef5112b-ed61-4433-9339-a6bd16e11462-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ef5112b-ed61-4433-9339-a6bd16e11462" (UID: "1ef5112b-ed61-4433-9339-a6bd16e11462"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.151592 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-kube-api-access-pc4z8" (OuterVolumeSpecName: "kube-api-access-pc4z8") pod "cb3dcd69-3478-4b64-86b4-9d5b22b803c8" (UID: "cb3dcd69-3478-4b64-86b4-9d5b22b803c8"). InnerVolumeSpecName "kube-api-access-pc4z8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.153021 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d484b71c-e076-43d8-ac63-afe47f877f98-kube-api-access-6x6gr" (OuterVolumeSpecName: "kube-api-access-6x6gr") pod "d484b71c-e076-43d8-ac63-afe47f877f98" (UID: "d484b71c-e076-43d8-ac63-afe47f877f98"). InnerVolumeSpecName "kube-api-access-6x6gr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.155373 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a433a1-cd64-432a-ac5a-1d8367a3a723-kube-api-access-2tcdr" (OuterVolumeSpecName: "kube-api-access-2tcdr") pod "40a433a1-cd64-432a-ac5a-1d8367a3a723" (UID: "40a433a1-cd64-432a-ac5a-1d8367a3a723"). InnerVolumeSpecName "kube-api-access-2tcdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.155524 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef5112b-ed61-4433-9339-a6bd16e11462-kube-api-access-5sf6t" (OuterVolumeSpecName: "kube-api-access-5sf6t") pod "1ef5112b-ed61-4433-9339-a6bd16e11462" (UID: "1ef5112b-ed61-4433-9339-a6bd16e11462"). InnerVolumeSpecName "kube-api-access-5sf6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.163025 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wksqv"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.163011 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wksqv" event={"ID":"cb3dcd69-3478-4b64-86b4-9d5b22b803c8","Type":"ContainerDied","Data":"511882f6cce129f5dcc0f5d45b6b1ce77297cbab132e236fe455def688c39a0f"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.163266 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511882f6cce129f5dcc0f5d45b6b1ce77297cbab132e236fe455def688c39a0f"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.165235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6699-account-create-update-gsbdz" event={"ID":"d484b71c-e076-43d8-ac63-afe47f877f98","Type":"ContainerDied","Data":"9978237520f0a7014607c2c1f7d5fb263f82130acf734a166a3ae6c653b2307f"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.165286 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9978237520f0a7014607c2c1f7d5fb263f82130acf734a166a3ae6c653b2307f"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.165451 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6699-account-create-update-gsbdz"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.168142 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5b8c-account-create-update-wq4nd"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.171689 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5b8c-account-create-update-wq4nd" event={"ID":"81ee298b-a2bb-4f72-874c-6a0a27a56d9a","Type":"ContainerDied","Data":"51e8c7e491a6b7cb07c4bb19166c6d527f9a3be48f9fb32b76c0a169f3a1818f"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.171763 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e8c7e491a6b7cb07c4bb19166c6d527f9a3be48f9fb32b76c0a169f3a1818f"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.183435 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-886657b65-p552j" event={"ID":"8fa06e8d-be9e-4451-8387-d3ec49dd8306","Type":"ContainerStarted","Data":"5aa28b5649ace6006edcf7ca887bd1062f191ab08030423dc4e8ee65bc52dfff"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.183534 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-886657b65-p552j"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.183551 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-886657b65-p552j" event={"ID":"8fa06e8d-be9e-4451-8387-d3ec49dd8306","Type":"ContainerStarted","Data":"0e1eabbbaa63c5764c44a582ebc5008e28c901ea880bd7fdcfd278bd736cbcb5"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.183571 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-886657b65-p552j" event={"ID":"8fa06e8d-be9e-4451-8387-d3ec49dd8306","Type":"ContainerStarted","Data":"65c57bc46dfce2ad6cad7709d3a9484b4422901b0fa3eb57e5fd4ac8b3114c65"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.187088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4sxkl" event={"ID":"5fdd976a-96f2-4c75-bab4-b557d5c6c025","Type":"ContainerDied","Data":"ae72b67c59fdc47344232a2d2ed641205d0d85d9ca603f13a401808d975fecc5"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.188682 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae72b67c59fdc47344232a2d2ed641205d0d85d9ca603f13a401808d975fecc5"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.191541 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4sxkl"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.194734 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f5qpr" event={"ID":"40a433a1-cd64-432a-ac5a-1d8367a3a723","Type":"ContainerDied","Data":"ac2dbebb6b305cf24165507165b5736898aa93b158b15fe2d103435e485078dc"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.194895 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac2dbebb6b305cf24165507165b5736898aa93b158b15fe2d103435e485078dc"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.195129 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f5qpr"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.196660 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-043c-account-create-update-m4hzv" event={"ID":"1ef5112b-ed61-4433-9339-a6bd16e11462","Type":"ContainerDied","Data":"bf328cd2f8deebf5ef9038f2a33a3d1e8a38b625f338dac039f9a406c5f5c948"}
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.196708 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf328cd2f8deebf5ef9038f2a33a3d1e8a38b625f338dac039f9a406c5f5c948"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.196791 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-043c-account-create-update-m4hzv"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.251007 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fdd976a-96f2-4c75-bab4-b557d5c6c025-operator-scripts\") pod \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\" (UID: \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.251455 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fdd976a-96f2-4c75-bab4-b557d5c6c025-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fdd976a-96f2-4c75-bab4-b557d5c6c025" (UID: "5fdd976a-96f2-4c75-bab4-b557d5c6c025"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.252081 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmpkv\" (UniqueName: \"kubernetes.io/projected/5fdd976a-96f2-4c75-bab4-b557d5c6c025-kube-api-access-qmpkv\") pod \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\" (UID: \"5fdd976a-96f2-4c75-bab4-b557d5c6c025\") "
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.259776 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fdd976a-96f2-4c75-bab4-b557d5c6c025-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.259812 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x6gr\" (UniqueName: \"kubernetes.io/projected/d484b71c-e076-43d8-ac63-afe47f877f98-kube-api-access-6x6gr\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.259826 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef5112b-ed61-4433-9339-a6bd16e11462-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.259839 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sf6t\" (UniqueName: \"kubernetes.io/projected/1ef5112b-ed61-4433-9339-a6bd16e11462-kube-api-access-5sf6t\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.259853 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc4z8\" (UniqueName: \"kubernetes.io/projected/cb3dcd69-3478-4b64-86b4-9d5b22b803c8-kube-api-access-pc4z8\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.259864 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tcdr\" (UniqueName: \"kubernetes.io/projected/40a433a1-cd64-432a-ac5a-1d8367a3a723-kube-api-access-2tcdr\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.260092 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-886657b65-p552j" podStartSLOduration=2.260073278 podStartE2EDuration="2.260073278s" podCreationTimestamp="2026-01-29 16:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:11.211182317 +0000 UTC m=+1683.698385533" watchObservedRunningTime="2026-01-29 16:57:11.260073278 +0000 UTC m=+1683.747276494"
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.264529 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fdd976a-96f2-4c75-bab4-b557d5c6c025-kube-api-access-qmpkv" (OuterVolumeSpecName: "kube-api-access-qmpkv") pod "5fdd976a-96f2-4c75-bab4-b557d5c6c025" (UID: "5fdd976a-96f2-4c75-bab4-b557d5c6c025"). InnerVolumeSpecName "kube-api-access-qmpkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.361444 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmpkv\" (UniqueName: \"kubernetes.io/projected/5fdd976a-96f2-4c75-bab4-b557d5c6c025-kube-api-access-qmpkv\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.459425 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.459666 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d606996d-c2f9-4072-b026-d7594399dd75" containerName="glance-log" containerID="cri-o://44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92" gracePeriod=30
Jan 29 16:57:11 crc kubenswrapper[4813]: I0129 16:57:11.459788 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d606996d-c2f9-4072-b026-d7594399dd75" containerName="glance-httpd" containerID="cri-o://78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36" gracePeriod=30
Jan 29 16:57:12 crc kubenswrapper[4813]: I0129 16:57:12.194478 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 16:57:12 crc kubenswrapper[4813]: I0129 16:57:12.194999 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerName="glance-log" containerID="cri-o://410866b4da0487fd569b786c3474b9455190956c89b7bae91ae7d9c87b59034e" gracePeriod=30
Jan 29 16:57:12 crc kubenswrapper[4813]: I0129 16:57:12.195188 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerName="glance-httpd" containerID="cri-o://d1652c697171b4644384aad69873e83fc4dc25a9c4b63c4c79eced2b478dcb88" gracePeriod=30
Jan 29 16:57:12 crc kubenswrapper[4813]: I0129 16:57:12.208738 4813 generic.go:334] "Generic (PLEG): container finished" podID="d606996d-c2f9-4072-b026-d7594399dd75" containerID="44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92" exitCode=143
Jan 29 16:57:12 crc kubenswrapper[4813]: I0129 16:57:12.209728 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d606996d-c2f9-4072-b026-d7594399dd75","Type":"ContainerDied","Data":"44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92"}
Jan 29 16:57:13 crc kubenswrapper[4813]: I0129 16:57:13.219967 4813 generic.go:334] "Generic (PLEG): container finished" podID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerID="410866b4da0487fd569b786c3474b9455190956c89b7bae91ae7d9c87b59034e" exitCode=143
Jan 29 16:57:13 crc kubenswrapper[4813]: I0129 16:57:13.220036 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953","Type":"ContainerDied","Data":"410866b4da0487fd569b786c3474b9455190956c89b7bae91ae7d9c87b59034e"}
Jan 29 16:57:13 crc kubenswrapper[4813]: E0129 16:57:13.428870 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79"
Jan 29 16:57:13 crc kubenswrapper[4813]: E0129 16:57:13.429057 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t7v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a8180da1-49c9-4fe1-9ac5-cdcec262a8d0): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 16:57:13 crc kubenswrapper[4813]: E0129 16:57:13.430265 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0"
Jan 29 16:57:13 crc kubenswrapper[4813]: I0129 16:57:13.580327 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:57:14 crc kubenswrapper[4813]: I0129 16:57:14.243449 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="ceilometer-central-agent" containerID="cri-o://71a3d3410b9d283f714cb42ce5149b7699c33fa384e7e979960158a6af65405e" gracePeriod=30
Jan 29 16:57:14 crc kubenswrapper[4813]: I0129 16:57:14.243523 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="ceilometer-notification-agent" containerID="cri-o://735355b6600c4f7c0b1b30c66795884cf754d37ea96a9723bad701667dcb8bf0" gracePeriod=30
Jan 29 16:57:14 crc kubenswrapper[4813]: I0129 16:57:14.243702 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="sg-core" containerID="cri-o://419addf8170ea70f96edf5cd043ba5f5fe27b8a54918d27ff07058b47ac2bad4" gracePeriod=30
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.207063 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.256541 4813 generic.go:334] "Generic (PLEG): container finished" podID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerID="419addf8170ea70f96edf5cd043ba5f5fe27b8a54918d27ff07058b47ac2bad4" exitCode=2
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.256796 4813 generic.go:334] "Generic (PLEG): container finished" podID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerID="71a3d3410b9d283f714cb42ce5149b7699c33fa384e7e979960158a6af65405e" exitCode=0
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.256846 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0","Type":"ContainerDied","Data":"419addf8170ea70f96edf5cd043ba5f5fe27b8a54918d27ff07058b47ac2bad4"}
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.256871 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0","Type":"ContainerDied","Data":"71a3d3410b9d283f714cb42ce5149b7699c33fa384e7e979960158a6af65405e"}
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.262314 4813 generic.go:334] "Generic (PLEG): container finished" podID="d606996d-c2f9-4072-b026-d7594399dd75" containerID="78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36" exitCode=0
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.262366 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d606996d-c2f9-4072-b026-d7594399dd75","Type":"ContainerDied","Data":"78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36"}
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.262402 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d606996d-c2f9-4072-b026-d7594399dd75","Type":"ContainerDied","Data":"8bb9b98f340a62f10ab807d41b3a3bc47cf2a032b744d15e10e105fa11e3ec75"}
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.262427 4813 scope.go:117] "RemoveContainer" containerID="78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.262483 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.316456 4813 scope.go:117] "RemoveContainer" containerID="44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.330545 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44898\" (UniqueName: \"kubernetes.io/projected/d606996d-c2f9-4072-b026-d7594399dd75-kube-api-access-44898\") pod \"d606996d-c2f9-4072-b026-d7594399dd75\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.330615 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-combined-ca-bundle\") pod \"d606996d-c2f9-4072-b026-d7594399dd75\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.330649 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-internal-tls-certs\") pod \"d606996d-c2f9-4072-b026-d7594399dd75\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.330798 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-httpd-run\") pod \"d606996d-c2f9-4072-b026-d7594399dd75\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.330875 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"d606996d-c2f9-4072-b026-d7594399dd75\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.330915 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-scripts\") pod \"d606996d-c2f9-4072-b026-d7594399dd75\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.331000 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-logs\") pod \"d606996d-c2f9-4072-b026-d7594399dd75\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.331060 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-config-data\") pod \"d606996d-c2f9-4072-b026-d7594399dd75\" (UID: \"d606996d-c2f9-4072-b026-d7594399dd75\") "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.331407 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d606996d-c2f9-4072-b026-d7594399dd75" (UID: "d606996d-c2f9-4072-b026-d7594399dd75"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.332280 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.332650 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-logs" (OuterVolumeSpecName: "logs") pod "d606996d-c2f9-4072-b026-d7594399dd75" (UID: "d606996d-c2f9-4072-b026-d7594399dd75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.337401 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-scripts" (OuterVolumeSpecName: "scripts") pod "d606996d-c2f9-4072-b026-d7594399dd75" (UID: "d606996d-c2f9-4072-b026-d7594399dd75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.360658 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "d606996d-c2f9-4072-b026-d7594399dd75" (UID: "d606996d-c2f9-4072-b026-d7594399dd75"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.363597 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d606996d-c2f9-4072-b026-d7594399dd75-kube-api-access-44898" (OuterVolumeSpecName: "kube-api-access-44898") pod "d606996d-c2f9-4072-b026-d7594399dd75" (UID: "d606996d-c2f9-4072-b026-d7594399dd75"). InnerVolumeSpecName "kube-api-access-44898". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.403188 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d606996d-c2f9-4072-b026-d7594399dd75" (UID: "d606996d-c2f9-4072-b026-d7594399dd75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.427882 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-config-data" (OuterVolumeSpecName: "config-data") pod "d606996d-c2f9-4072-b026-d7594399dd75" (UID: "d606996d-c2f9-4072-b026-d7594399dd75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.435038 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.435069 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d606996d-c2f9-4072-b026-d7594399dd75-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.435079 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.435089 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44898\" (UniqueName: \"kubernetes.io/projected/d606996d-c2f9-4072-b026-d7594399dd75-kube-api-access-44898\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.435101 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.435135 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.454621 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.456464 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d606996d-c2f9-4072-b026-d7594399dd75" (UID: "d606996d-c2f9-4072-b026-d7594399dd75"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.536425 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.536466 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d606996d-c2f9-4072-b026-d7594399dd75-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.570367 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.581863 4813 scope.go:117] "RemoveContainer" containerID="78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36"
Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.582413 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36\": container with ID starting with 78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36 not found: ID does not exist" containerID="78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.582454 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36"} err="failed to get container status \"78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36\": rpc error: code = NotFound desc = could not find container \"78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36\": container with ID starting with 78899f39f90a60f3299417150721817c03a59a3bdd2114e5913e2e4c8cbbef36 not found: ID does not exist"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.582481 4813 scope.go:117] "RemoveContainer" containerID="44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92"
Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.582776 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92\": container with ID starting with 44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92 not found: ID does not exist" containerID="44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.582867 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92"} err="failed to get container status \"44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92\": rpc error: code = NotFound desc = could not find container \"44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92\": container with ID starting with 44d39bd318c46cd408dff2a3edcf6f43109e614335e69953775b27319dd7aa92 not found: ID does not exist"
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.628857 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.642802 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api"
pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.672548 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d96cd6c9c-z54c4"] Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.673048 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" podUID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" containerName="dnsmasq-dns" containerID="cri-o://097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c" gracePeriod=10 Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.683186 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.683855 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d606996d-c2f9-4072-b026-d7594399dd75" containerName="glance-httpd" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.683952 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d606996d-c2f9-4072-b026-d7594399dd75" containerName="glance-httpd" Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.684039 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d606996d-c2f9-4072-b026-d7594399dd75" containerName="glance-log" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.684149 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d606996d-c2f9-4072-b026-d7594399dd75" containerName="glance-log" Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.684232 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d484b71c-e076-43d8-ac63-afe47f877f98" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.684307 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d484b71c-e076-43d8-ac63-afe47f877f98" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.684410 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5112b-ed61-4433-9339-a6bd16e11462" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.684484 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef5112b-ed61-4433-9339-a6bd16e11462" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.684589 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ee298b-a2bb-4f72-874c-6a0a27a56d9a" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.684678 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ee298b-a2bb-4f72-874c-6a0a27a56d9a" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.684770 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a433a1-cd64-432a-ac5a-1d8367a3a723" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.684844 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a433a1-cd64-432a-ac5a-1d8367a3a723" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.684938 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3dcd69-3478-4b64-86b4-9d5b22b803c8" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.685023 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cb3dcd69-3478-4b64-86b4-9d5b22b803c8" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: E0129 16:57:15.685122 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdd976a-96f2-4c75-bab4-b557d5c6c025" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.685206 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdd976a-96f2-4c75-bab4-b557d5c6c025" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.685499 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdd976a-96f2-4c75-bab4-b557d5c6c025" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.685587 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d484b71c-e076-43d8-ac63-afe47f877f98" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.685675 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d606996d-c2f9-4072-b026-d7594399dd75" containerName="glance-log" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.685746 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ee298b-a2bb-4f72-874c-6a0a27a56d9a" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.685825 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a433a1-cd64-432a-ac5a-1d8367a3a723" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.685918 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d606996d-c2f9-4072-b026-d7594399dd75" containerName="glance-httpd" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.686023 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef5112b-ed61-4433-9339-a6bd16e11462" containerName="mariadb-account-create-update" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.686130 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3dcd69-3478-4b64-86b4-9d5b22b803c8" containerName="mariadb-database-create" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.687477 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.691939 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.697092 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.707476 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.845633 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.845715 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.845749 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfrvl\" (UniqueName: \"kubernetes.io/projected/494a1163-f584-43fa-9224-825be2a90c27-kube-api-access-xfrvl\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.845817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.845855 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-scripts\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.845889 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.845921 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-config-data\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.846000 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-logs\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.947408 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-logs\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.947786 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.947826 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.947846 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfrvl\" (UniqueName: \"kubernetes.io/projected/494a1163-f584-43fa-9224-825be2a90c27-kube-api-access-xfrvl\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.947903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.947936 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-scripts\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.947960 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.947983 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-config-data\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.948102 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-logs\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.949058 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.949446 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.951920 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-scripts\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.952661 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.953079 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-config-data\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.957666 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.971175 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfrvl\" (UniqueName: \"kubernetes.io/projected/494a1163-f584-43fa-9224-825be2a90c27-kube-api-access-xfrvl\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:15 crc kubenswrapper[4813]: I0129 16:57:15.994881 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " pod="openstack/glance-default-internal-api-0" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.010285 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.203387 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.246211 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:57:16 crc kubenswrapper[4813]: E0129 16:57:16.246535 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.287990 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d606996d-c2f9-4072-b026-d7594399dd75" path="/var/lib/kubelet/pods/d606996d-c2f9-4072-b026-d7594399dd75/volumes" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.301147 4813 generic.go:334] "Generic (PLEG): container finished" podID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" containerID="097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c" exitCode=0 Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.301607 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" event={"ID":"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7","Type":"ContainerDied","Data":"097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c"} Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.301638 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.301658 4813 scope.go:117] "RemoveContainer" containerID="097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.301645 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d96cd6c9c-z54c4" event={"ID":"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7","Type":"ContainerDied","Data":"0708d2e1d97f96cdb02dcee8b1d912c5c5d62247e12685563b32ae264825c17b"} Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.342500 4813 generic.go:334] "Generic (PLEG): container finished" podID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerID="d1652c697171b4644384aad69873e83fc4dc25a9c4b63c4c79eced2b478dcb88" exitCode=0 Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.342566 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953","Type":"ContainerDied","Data":"d1652c697171b4644384aad69873e83fc4dc25a9c4b63c4c79eced2b478dcb88"} Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.356614 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rl5p\" (UniqueName: \"kubernetes.io/projected/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-kube-api-access-7rl5p\") pod \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.356693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-config\") pod \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.356742 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-sb\") pod \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.356813 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-nb\") pod \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.356851 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-swift-storage-0\") pod \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.356986 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-svc\") pod \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\" (UID: \"0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.385302 4813 scope.go:117] "RemoveContainer" containerID="bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.388657 4813 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-kube-api-access-7rl5p" (OuterVolumeSpecName: "kube-api-access-7rl5p") pod "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" (UID: "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7"). InnerVolumeSpecName "kube-api-access-7rl5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.459823 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rl5p\" (UniqueName: \"kubernetes.io/projected/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-kube-api-access-7rl5p\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.526817 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" (UID: "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.541448 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-config" (OuterVolumeSpecName: "config") pod "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" (UID: "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.546185 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" (UID: "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.556953 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" (UID: "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.561585 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.561805 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.561899 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.561966 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.649144 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" (UID: "0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.670654 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.748142 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.751423 4813 scope.go:117] "RemoveContainer" containerID="097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c" Jan 29 16:57:16 crc kubenswrapper[4813]: E0129 16:57:16.769903 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c\": container with ID starting with 097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c not found: ID does not exist" containerID="097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.769965 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c"} err="failed to get container status \"097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c\": rpc error: code = NotFound desc = could not find container \"097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c\": container with ID starting with 097d92504785e3c4f349150c2f9bccf7e77161aec9d7bf4056f394f855758f6c not found: ID does not exist" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.769998 4813 scope.go:117] "RemoveContainer" containerID="bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4" Jan 29 16:57:16 crc kubenswrapper[4813]: E0129 16:57:16.770572 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4\": container with ID starting with bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4 not found: ID does not exist" containerID="bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.770610 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4"} err="failed to get container status \"bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4\": rpc error: code = NotFound desc = could not find container \"bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4\": container with ID starting with bbc4a32c18af734de3bcb5aa034467422e01a701a18df8e611859dc2636986b4 not found: ID does not exist" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.863709 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.910064 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7g6dr"] Jan 29 16:57:16 crc kubenswrapper[4813]: E0129 16:57:16.910577 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerName="glance-httpd" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.910598 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerName="glance-httpd" Jan 29 16:57:16 crc kubenswrapper[4813]: E0129 16:57:16.910619 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" containerName="dnsmasq-dns" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.910628 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" containerName="dnsmasq-dns" Jan 29 16:57:16 crc kubenswrapper[4813]: E0129 16:57:16.910640 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerName="glance-log" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.910648 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerName="glance-log" Jan 29 16:57:16 crc kubenswrapper[4813]: E0129 16:57:16.910665 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" containerName="init" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.910672 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" containerName="init" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.910941 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerName="glance-httpd" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.910957 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" containerName="dnsmasq-dns" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.910976 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" containerName="glance-log" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.911757 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.914597 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nq2t4" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.914795 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.917699 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.973174 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7g6dr"] Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.975547 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-public-tls-certs\") pod \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.975588 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-scripts\") pod \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.975742 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.975802 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-combined-ca-bundle\") pod \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.975909 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wsww\" (UniqueName: \"kubernetes.io/projected/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-kube-api-access-5wsww\") pod \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.975978 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-logs\") pod \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.976031 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-httpd-run\") pod \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.976084 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-config-data\") pod \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\" (UID: \"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953\") " Jan 29 16:57:16 crc 
kubenswrapper[4813]: I0129 16:57:16.976346 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5hw\" (UniqueName: \"kubernetes.io/projected/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-kube-api-access-5m5hw\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.976413 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-config-data\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.976465 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.976493 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-scripts\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.976805 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" (UID: "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.980013 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-scripts" (OuterVolumeSpecName: "scripts") pod "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" (UID: "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.980120 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" (UID: "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.982604 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-logs" (OuterVolumeSpecName: "logs") pod "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" (UID: "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:16 crc kubenswrapper[4813]: I0129 16:57:16.987237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-kube-api-access-5wsww" (OuterVolumeSpecName: "kube-api-access-5wsww") pod "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" (UID: "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953"). InnerVolumeSpecName "kube-api-access-5wsww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.002307 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d96cd6c9c-z54c4"] Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.017688 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d96cd6c9c-z54c4"] Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.021430 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" (UID: "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.043064 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-config-data" (OuterVolumeSpecName: "config-data") pod "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" (UID: "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078498 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5hw\" (UniqueName: \"kubernetes.io/projected/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-kube-api-access-5m5hw\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078626 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-config-data\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078703 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078733 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-scripts\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078832 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wsww\" (UniqueName: 
\"kubernetes.io/projected/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-kube-api-access-5wsww\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078846 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078856 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078866 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078874 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078892 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.078916 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.084042 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-config-data\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.086057 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-scripts\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.094506 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.099994 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5hw\" (UniqueName: \"kubernetes.io/projected/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-kube-api-access-5m5hw\") pod \"nova-cell0-conductor-db-sync-7g6dr\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.129391 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" (UID: 
"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.139769 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.180599 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.180648 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.251791 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.360825 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"494a1163-f584-43fa-9224-825be2a90c27","Type":"ContainerStarted","Data":"6b7ef72bcf0a50683fd1b6f1c4256b15e025ec99f59800ac0452007736b919c5"} Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.363707 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953","Type":"ContainerDied","Data":"b851be1454eb45fd4d94a3ec5bd0f4a55323ac51465567d072cf93521f6a0fd1"} Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.364061 4813 scope.go:117] "RemoveContainer" containerID="d1652c697171b4644384aad69873e83fc4dc25a9c4b63c4c79eced2b478dcb88" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.364187 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.388354 4813 generic.go:334] "Generic (PLEG): container finished" podID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerID="735355b6600c4f7c0b1b30c66795884cf754d37ea96a9723bad701667dcb8bf0" exitCode=0 Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.388433 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0","Type":"ContainerDied","Data":"735355b6600c4f7c0b1b30c66795884cf754d37ea96a9723bad701667dcb8bf0"} Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.440257 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.455086 4813 scope.go:117] "RemoveContainer" containerID="410866b4da0487fd569b786c3474b9455190956c89b7bae91ae7d9c87b59034e" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.477385 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.493010 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.495034 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.497744 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.504597 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.505479 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.587537 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.587581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.587654 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.587674 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpq5\" (UniqueName: \"kubernetes.io/projected/296a369a-d653-4033-9a10-7077b62875f1-kube-api-access-bgpq5\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.587695 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.587728 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-logs\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.587760 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.587796 4813 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.688549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.688597 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.688672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.688694 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpq5\" (UniqueName: \"kubernetes.io/projected/296a369a-d653-4033-9a10-7077b62875f1-kube-api-access-bgpq5\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.688718 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.688759 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-logs\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.688800 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.688852 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.689535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.689828 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-logs\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.693638 4813 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.695870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.699097 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.701605 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-config-data\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.702620 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-scripts\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.729040 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.738802 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpq5\" (UniqueName: \"kubernetes.io/projected/296a369a-d653-4033-9a10-7077b62875f1-kube-api-access-bgpq5\") pod \"glance-default-external-api-0\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " pod="openstack/glance-default-external-api-0" Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.829137 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7g6dr"] Jan 29 16:57:17 crc kubenswrapper[4813]: I0129 16:57:17.881402 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.087611 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.202048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-combined-ca-bundle\") pod \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.202126 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-log-httpd\") pod \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.202204 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-config-data\") pod \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.202252 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t7v2\" (UniqueName: \"kubernetes.io/projected/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-kube-api-access-2t7v2\") pod \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.202322 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-run-httpd\") pod \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.202445 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-sg-core-conf-yaml\") pod \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.202479 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-scripts\") pod \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\" (UID: \"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0\") " Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.204423 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" (UID: "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.204515 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" (UID: "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.211079 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-kube-api-access-2t7v2" (OuterVolumeSpecName: "kube-api-access-2t7v2") pod "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" (UID: "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0"). InnerVolumeSpecName "kube-api-access-2t7v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.217859 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-scripts" (OuterVolumeSpecName: "scripts") pod "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" (UID: "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.234784 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" (UID: "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.282274 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7" path="/var/lib/kubelet/pods/0284f5f8-5ff3-42aa-bd39-ea5a24fc2fd7/volumes" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.283214 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953" path="/var/lib/kubelet/pods/bd0f2eb0-1d9e-42a5-b0bf-33880a3d4953/volumes" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.311030 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-config-data" (OuterVolumeSpecName: "config-data") pod "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" (UID: "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.316125 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.316173 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.316188 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.316206 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.316249 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t7v2\" (UniqueName: \"kubernetes.io/projected/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-kube-api-access-2t7v2\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.316269 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.329006 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" (UID: "a8180da1-49c9-4fe1-9ac5-cdcec262a8d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.428244 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.435290 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8180da1-49c9-4fe1-9ac5-cdcec262a8d0","Type":"ContainerDied","Data":"a1cb56da282b883ff011248fe9e95b18c4f85d9a54f0cc31a5831876b8fb8a76"} Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.435360 4813 scope.go:117] "RemoveContainer" containerID="419addf8170ea70f96edf5cd043ba5f5fe27b8a54918d27ff07058b47ac2bad4" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.435577 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.443149 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"494a1163-f584-43fa-9224-825be2a90c27","Type":"ContainerStarted","Data":"92a3054a95c28ed9a7e7347b5d0b1b2fbb5102f2edf17c6d56d65e646701fc21"} Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.449020 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" event={"ID":"c893ab38-542a-4b23-b2c7-cc6a1a8281a5","Type":"ContainerStarted","Data":"4e9b41b1cf63b88b197b2f80487c2ff65f34e839c7a62ba863f09168a7d9356d"} Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.505832 4813 scope.go:117] "RemoveContainer" containerID="735355b6600c4f7c0b1b30c66795884cf754d37ea96a9723bad701667dcb8bf0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.552079 4813 scope.go:117] "RemoveContainer" containerID="71a3d3410b9d283f714cb42ce5149b7699c33fa384e7e979960158a6af65405e" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.582413 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.594032 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.609314 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:18 crc kubenswrapper[4813]: E0129 16:57:18.609748 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="sg-core" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.609762 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="sg-core" Jan 29 16:57:18 crc kubenswrapper[4813]: E0129 16:57:18.609793 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="ceilometer-notification-agent" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.609811 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="ceilometer-notification-agent" Jan 29 16:57:18 crc kubenswrapper[4813]: E0129 16:57:18.609827 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="ceilometer-central-agent" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.609834 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="ceilometer-central-agent" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.610015 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="ceilometer-notification-agent" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.610031 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="ceilometer-central-agent" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.610047 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" containerName="sg-core" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.611760 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.614335 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.615188 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.633613 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df2dq\" (UniqueName: \"kubernetes.io/projected/374235a1-63fe-41a5-b4fb-54e3a5c8007a-kube-api-access-df2dq\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.633694 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-scripts\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.633734 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.633784 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.633803 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-run-httpd\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.633867 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-log-httpd\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.633915 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-config-data\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.656972 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.670562 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.734765 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-scripts\") pod 
\"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.734840 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.734879 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.734901 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-run-httpd\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.734955 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-log-httpd\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.734993 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-config-data\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.735028 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df2dq\" (UniqueName: \"kubernetes.io/projected/374235a1-63fe-41a5-b4fb-54e3a5c8007a-kube-api-access-df2dq\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.742554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-run-httpd\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.742861 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-log-httpd\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.748389 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-scripts\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.750298 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-config-data\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " 
pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.750750 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.750771 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.765629 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df2dq\" (UniqueName: \"kubernetes.io/projected/374235a1-63fe-41a5-b4fb-54e3a5c8007a-kube-api-access-df2dq\") pod \"ceilometer-0\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " pod="openstack/ceilometer-0" Jan 29 16:57:18 crc kubenswrapper[4813]: I0129 16:57:18.980541 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:19 crc kubenswrapper[4813]: I0129 16:57:19.493516 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"494a1163-f584-43fa-9224-825be2a90c27","Type":"ContainerStarted","Data":"8abb7a4ee4a3b9fc231ace5a03ef91398ee318f728d8debd57c6d16f0aae7ba3"} Jan 29 16:57:19 crc kubenswrapper[4813]: I0129 16:57:19.508819 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296a369a-d653-4033-9a10-7077b62875f1","Type":"ContainerStarted","Data":"a6a93a4957fda92aefa746e5ed86fbe72b0597c59cfa05a597c47a63d8c4434a"} Jan 29 16:57:19 crc kubenswrapper[4813]: I0129 16:57:19.557642 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.557621777 podStartE2EDuration="4.557621777s" podCreationTimestamp="2026-01-29 16:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:19.524710536 +0000 UTC m=+1692.011913762" watchObservedRunningTime="2026-01-29 16:57:19.557621777 +0000 UTC m=+1692.044824993" Jan 29 16:57:19 crc kubenswrapper[4813]: I0129 16:57:19.858220 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:20 crc kubenswrapper[4813]: I0129 16:57:20.264344 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8180da1-49c9-4fe1-9ac5-cdcec262a8d0" path="/var/lib/kubelet/pods/a8180da1-49c9-4fe1-9ac5-cdcec262a8d0/volumes" Jan 29 16:57:20 crc kubenswrapper[4813]: I0129 16:57:20.549713 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296a369a-d653-4033-9a10-7077b62875f1","Type":"ContainerStarted","Data":"c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04"} Jan 29 16:57:20 crc kubenswrapper[4813]: I0129 16:57:20.550049 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296a369a-d653-4033-9a10-7077b62875f1","Type":"ContainerStarted","Data":"a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b"} Jan 29 16:57:20 crc kubenswrapper[4813]: I0129 
16:57:20.556797 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"374235a1-63fe-41a5-b4fb-54e3a5c8007a","Type":"ContainerStarted","Data":"2f638699009da5393d483bf3235a9ca3a7ac12e3ea3d53f1d8ff52a7445446cf"} Jan 29 16:57:21 crc kubenswrapper[4813]: I0129 16:57:21.574597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"374235a1-63fe-41a5-b4fb-54e3a5c8007a","Type":"ContainerStarted","Data":"fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8"} Jan 29 16:57:22 crc kubenswrapper[4813]: I0129 16:57:22.587633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"374235a1-63fe-41a5-b4fb-54e3a5c8007a","Type":"ContainerStarted","Data":"168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492"} Jan 29 16:57:23 crc kubenswrapper[4813]: E0129 16:57:23.095969 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:57:23 crc kubenswrapper[4813]: E0129 16:57:23.096502 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df2dq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(374235a1-63fe-41a5-b4fb-54e3a5c8007a): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:57:23 crc kubenswrapper[4813]: E0129 16:57:23.097888 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" Jan 29 16:57:23 crc kubenswrapper[4813]: I0129 16:57:23.620476 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"374235a1-63fe-41a5-b4fb-54e3a5c8007a","Type":"ContainerStarted","Data":"9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a"} Jan 29 16:57:23 crc kubenswrapper[4813]: E0129 16:57:23.623435 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" Jan 29 16:57:23 crc kubenswrapper[4813]: I0129 16:57:23.660818 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.660795195 podStartE2EDuration="6.660795195s" podCreationTimestamp="2026-01-29 16:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:20.584074587 +0000 UTC m=+1693.071277823" watchObservedRunningTime="2026-01-29 16:57:23.660795195 +0000 UTC m=+1696.147998411" Jan 29 16:57:24 crc kubenswrapper[4813]: E0129 16:57:24.631593 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" Jan 29 16:57:24 crc kubenswrapper[4813]: I0129 16:57:24.648021 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:25 crc kubenswrapper[4813]: I0129 16:57:25.638328 4813 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="ceilometer-central-agent" containerID="cri-o://fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8" gracePeriod=30 Jan 29 16:57:25 crc kubenswrapper[4813]: I0129 16:57:25.638403 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="ceilometer-notification-agent" containerID="cri-o://168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492" gracePeriod=30 Jan 29 16:57:25 crc kubenswrapper[4813]: I0129 16:57:25.638400 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="sg-core" containerID="cri-o://9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a" gracePeriod=30 Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.011784 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.011839 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.044696 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.100036 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.668743 4813 generic.go:334] "Generic (PLEG): container finished" podID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerID="9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a" exitCode=2 Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.669053 4813 generic.go:334] "Generic (PLEG): container finished" podID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerID="168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492" exitCode=0 Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.669359 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"374235a1-63fe-41a5-b4fb-54e3a5c8007a","Type":"ContainerDied","Data":"9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a"} Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.669392 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"374235a1-63fe-41a5-b4fb-54e3a5c8007a","Type":"ContainerDied","Data":"168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492"} Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.669949 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:26 crc kubenswrapper[4813]: I0129 16:57:26.669980 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:27 crc kubenswrapper[4813]: I0129 16:57:27.881899 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 16:57:27 crc kubenswrapper[4813]: I0129 16:57:27.881955 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 16:57:27 crc 
kubenswrapper[4813]: I0129 16:57:27.921985 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 16:57:27 crc kubenswrapper[4813]: I0129 16:57:27.936633 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4813]: I0129 16:57:28.283381 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:57:28 crc kubenswrapper[4813]: E0129 16:57:28.283678 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:57:28 crc kubenswrapper[4813]: I0129 16:57:28.686508 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4813]: I0129 16:57:28.686567 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 16:57:28 crc kubenswrapper[4813]: I0129 16:57:28.889809 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:28 crc kubenswrapper[4813]: I0129 16:57:28.890229 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:57:28 crc kubenswrapper[4813]: I0129 16:57:28.904685 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 16:57:30 crc kubenswrapper[4813]: I0129 16:57:30.951452 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 16:57:30 crc kubenswrapper[4813]: I0129 16:57:30.951857 4813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:57:31 crc kubenswrapper[4813]: I0129 16:57:31.106503 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.461265 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.572040 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-scripts\") pod \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.572172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-combined-ca-bundle\") pod \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.572785 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-config-data\") pod \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.572957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df2dq\" (UniqueName: \"kubernetes.io/projected/374235a1-63fe-41a5-b4fb-54e3a5c8007a-kube-api-access-df2dq\") pod \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.572982 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-log-httpd\") pod \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.573054 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-run-httpd\") pod \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.573166 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-sg-core-conf-yaml\") pod \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\" (UID: \"374235a1-63fe-41a5-b4fb-54e3a5c8007a\") " Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.575359 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "374235a1-63fe-41a5-b4fb-54e3a5c8007a" (UID: "374235a1-63fe-41a5-b4fb-54e3a5c8007a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.575552 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "374235a1-63fe-41a5-b4fb-54e3a5c8007a" (UID: "374235a1-63fe-41a5-b4fb-54e3a5c8007a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.582364 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374235a1-63fe-41a5-b4fb-54e3a5c8007a-kube-api-access-df2dq" (OuterVolumeSpecName: "kube-api-access-df2dq") pod "374235a1-63fe-41a5-b4fb-54e3a5c8007a" (UID: "374235a1-63fe-41a5-b4fb-54e3a5c8007a"). InnerVolumeSpecName "kube-api-access-df2dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.582475 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-scripts" (OuterVolumeSpecName: "scripts") pod "374235a1-63fe-41a5-b4fb-54e3a5c8007a" (UID: "374235a1-63fe-41a5-b4fb-54e3a5c8007a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.609008 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "374235a1-63fe-41a5-b4fb-54e3a5c8007a" (UID: "374235a1-63fe-41a5-b4fb-54e3a5c8007a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.640009 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "374235a1-63fe-41a5-b4fb-54e3a5c8007a" (UID: "374235a1-63fe-41a5-b4fb-54e3a5c8007a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.661209 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-config-data" (OuterVolumeSpecName: "config-data") pod "374235a1-63fe-41a5-b4fb-54e3a5c8007a" (UID: "374235a1-63fe-41a5-b4fb-54e3a5c8007a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.675970 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df2dq\" (UniqueName: \"kubernetes.io/projected/374235a1-63fe-41a5-b4fb-54e3a5c8007a-kube-api-access-df2dq\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.676026 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.676046 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/374235a1-63fe-41a5-b4fb-54e3a5c8007a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.676061 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.676076 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.676090 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.676104 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374235a1-63fe-41a5-b4fb-54e3a5c8007a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.727221 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" event={"ID":"c893ab38-542a-4b23-b2c7-cc6a1a8281a5","Type":"ContainerStarted","Data":"2a119d6cd3391339c60a7d61a292d8fdc6f031b8d75829ab322a43fe2c01d8cf"} Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.734187 4813 generic.go:334] "Generic (PLEG): container finished" podID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerID="fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8" exitCode=0 Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.734250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"374235a1-63fe-41a5-b4fb-54e3a5c8007a","Type":"ContainerDied","Data":"fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8"} Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.734267 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.734303 4813 scope.go:117] "RemoveContainer" containerID="9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.734289 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"374235a1-63fe-41a5-b4fb-54e3a5c8007a","Type":"ContainerDied","Data":"2f638699009da5393d483bf3235a9ca3a7ac12e3ea3d53f1d8ff52a7445446cf"} Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.774215 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" podStartSLOduration=2.6802185720000002 podStartE2EDuration="16.774173022s" podCreationTimestamp="2026-01-29 16:57:16 +0000 UTC" firstStartedPulling="2026-01-29 16:57:17.828181408 +0000 UTC m=+1690.315384624" lastFinishedPulling="2026-01-29 16:57:31.922135858 +0000 UTC m=+1704.409339074" observedRunningTime="2026-01-29 16:57:32.756054254 +0000 UTC m=+1705.243257470" watchObservedRunningTime="2026-01-29 16:57:32.774173022 +0000 UTC m=+1705.261376238" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.780195 4813 scope.go:117] "RemoveContainer" containerID="168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.836516 4813 scope.go:117] "RemoveContainer" containerID="fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.850052 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.881152 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.886824 4813 scope.go:117] "RemoveContainer" containerID="9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a" Jan 29 16:57:32 crc kubenswrapper[4813]: E0129 16:57:32.887402 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a\": container with ID starting with 9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a not found: ID does not exist" containerID="9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.887447 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a"} err="failed to get container status \"9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a\": rpc error: code = NotFound desc = could not find container \"9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a\": container with ID starting with 9c3326327b8ff78cb035a063b46278dccbf63d2d61dc3a20f84dd9fa31c2ea1a not found: ID does not exist" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.887470 4813 scope.go:117] "RemoveContainer" containerID="168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492" Jan 29 16:57:32 crc kubenswrapper[4813]: E0129 16:57:32.888238 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492\": container with ID starting with 
168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492 not found: ID does not exist" containerID="168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.888305 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492"} err="failed to get container status \"168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492\": rpc error: code = NotFound desc = could not find container \"168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492\": container with ID starting with 168e811e231386db8197a215fb793d0f3c84de5adeacb5a5ed97a0fe51a99492 not found: ID does not exist" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.888345 4813 scope.go:117] "RemoveContainer" containerID="fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8" Jan 29 16:57:32 crc kubenswrapper[4813]: E0129 16:57:32.888915 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8\": container with ID starting with fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8 not found: ID does not exist" containerID="fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.888941 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8"} err="failed to get container status \"fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8\": rpc error: code = NotFound desc = could not find container \"fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8\": container with ID starting with fb1062a722e6ccbf480cc1f3b6fe3b5c133deccde9871c79d00c58dd4459cae8 not found: ID does not exist" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.898034 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:32 crc kubenswrapper[4813]: E0129 16:57:32.898594 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="ceilometer-central-agent" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.898613 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="ceilometer-central-agent" Jan 29 16:57:32 crc kubenswrapper[4813]: E0129 16:57:32.898631 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="ceilometer-notification-agent" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.898638 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="ceilometer-notification-agent" Jan 29 16:57:32 crc kubenswrapper[4813]: E0129 16:57:32.898647 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="sg-core" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.898655 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="sg-core" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.898845 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" 
containerName="ceilometer-central-agent" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.898870 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="sg-core" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.898891 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" containerName="ceilometer-notification-agent" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.900670 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.904354 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.904529 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.912550 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.983972 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8rxh\" (UniqueName: \"kubernetes.io/projected/fa03a455-bc61-4a23-a690-df87aaef9715-kube-api-access-w8rxh\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.984082 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-scripts\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.984139 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.984158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-run-httpd\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.984409 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.984526 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-log-httpd\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:32 crc kubenswrapper[4813]: I0129 16:57:32.984577 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-config-data\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.086182 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8rxh\" (UniqueName: \"kubernetes.io/projected/fa03a455-bc61-4a23-a690-df87aaef9715-kube-api-access-w8rxh\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.086336 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-scripts\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.086403 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.086424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-run-httpd\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.086486 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.086534 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-log-httpd\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.086553 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-config-data\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.087591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-run-httpd\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.087765 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-log-httpd\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.091870 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.093054 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-scripts\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.094232 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-config-data\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.101405 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.109349 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8rxh\" (UniqueName: \"kubernetes.io/projected/fa03a455-bc61-4a23-a690-df87aaef9715-kube-api-access-w8rxh\") pod \"ceilometer-0\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.224103 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:33 crc kubenswrapper[4813]: W0129 16:57:33.733675 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa03a455_bc61_4a23_a690_df87aaef9715.slice/crio-b398f737a7d02e7fd7ca093af4792e04aee110ccadc0412f7e4f7239ea9d53ab WatchSource:0}: Error finding container b398f737a7d02e7fd7ca093af4792e04aee110ccadc0412f7e4f7239ea9d53ab: Status 404 returned error can't find the container with id b398f737a7d02e7fd7ca093af4792e04aee110ccadc0412f7e4f7239ea9d53ab Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.745912 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:33 crc kubenswrapper[4813]: I0129 16:57:33.771864 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerStarted","Data":"b398f737a7d02e7fd7ca093af4792e04aee110ccadc0412f7e4f7239ea9d53ab"} Jan 29 16:57:34 crc kubenswrapper[4813]: I0129 16:57:34.253583 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374235a1-63fe-41a5-b4fb-54e3a5c8007a" path="/var/lib/kubelet/pods/374235a1-63fe-41a5-b4fb-54e3a5c8007a/volumes" Jan 29 16:57:34 crc kubenswrapper[4813]: I0129 16:57:34.790906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerStarted","Data":"c8aab5dc2fc046f37559c21a5e1172290c5f1bdca6365a2ee7410cd3cb9b0741"} Jan 29 16:57:35 crc kubenswrapper[4813]: I0129 16:57:35.722689 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:35 crc kubenswrapper[4813]: I0129 16:57:35.802979 4813 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerStarted","Data":"ed9e40f63fa3469fca937376a1fb3465ef607517dcaf33084a9cbfe77b8a9d73"} Jan 29 16:57:36 crc kubenswrapper[4813]: I0129 16:57:36.813732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerStarted","Data":"4b40023f71bd1969a82c8e520bd6c825c829ad3ed2c220c67aa98db08663982d"} Jan 29 16:57:39 crc kubenswrapper[4813]: I0129 16:57:39.079697 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:39 crc kubenswrapper[4813]: I0129 16:57:39.699140 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-886657b65-p552j" Jan 29 16:57:39 crc kubenswrapper[4813]: I0129 16:57:39.805018 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-798d457854-wxskz"] Jan 29 16:57:39 crc kubenswrapper[4813]: I0129 16:57:39.808023 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-798d457854-wxskz" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" containerName="neutron-api" containerID="cri-o://1e5b623683f6cb7ba9c5493b668c94cddb77f58344b1409daac03e8d2ad6df61" gracePeriod=30 Jan 29 16:57:39 crc kubenswrapper[4813]: I0129 16:57:39.808098 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-798d457854-wxskz" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" containerName="neutron-httpd" containerID="cri-o://0e267593dfe5d86281e238fd74b2f732badf00023a2da0789f38bd552593e52d" gracePeriod=30 Jan 29 16:57:40 crc kubenswrapper[4813]: I0129 16:57:40.240235 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:57:40 crc kubenswrapper[4813]: E0129 16:57:40.240834 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:57:40 crc kubenswrapper[4813]: I0129 16:57:40.864448 4813 generic.go:334] "Generic (PLEG): container finished" podID="17e6b715-d8fb-496e-a457-0730e02646f3" containerID="0e267593dfe5d86281e238fd74b2f732badf00023a2da0789f38bd552593e52d" exitCode=0 Jan 29 16:57:40 crc kubenswrapper[4813]: I0129 16:57:40.864495 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d457854-wxskz" event={"ID":"17e6b715-d8fb-496e-a457-0730e02646f3","Type":"ContainerDied","Data":"0e267593dfe5d86281e238fd74b2f732badf00023a2da0789f38bd552593e52d"} Jan 29 16:57:43 crc kubenswrapper[4813]: I0129 16:57:43.891646 4813 generic.go:334] "Generic (PLEG): container finished" podID="17e6b715-d8fb-496e-a457-0730e02646f3" containerID="1e5b623683f6cb7ba9c5493b668c94cddb77f58344b1409daac03e8d2ad6df61" exitCode=0 Jan 29 16:57:43 crc kubenswrapper[4813]: I0129 16:57:43.891732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d457854-wxskz" event={"ID":"17e6b715-d8fb-496e-a457-0730e02646f3","Type":"ContainerDied","Data":"1e5b623683f6cb7ba9c5493b668c94cddb77f58344b1409daac03e8d2ad6df61"} Jan 29 16:57:46 crc kubenswrapper[4813]: I0129 
16:57:46.926063 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d457854-wxskz" event={"ID":"17e6b715-d8fb-496e-a457-0730e02646f3","Type":"ContainerDied","Data":"d4443c88983298bdede1db56fc1ad3cb7d06028cf347c58fca9754986e35af23"} Jan 29 16:57:46 crc kubenswrapper[4813]: I0129 16:57:46.926653 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4443c88983298bdede1db56fc1ad3cb7d06028cf347c58fca9754986e35af23" Jan 29 16:57:46 crc kubenswrapper[4813]: I0129 16:57:46.929203 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.064580 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-ovndb-tls-certs\") pod \"17e6b715-d8fb-496e-a457-0730e02646f3\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.064674 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-httpd-config\") pod \"17e6b715-d8fb-496e-a457-0730e02646f3\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.064726 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlt92\" (UniqueName: \"kubernetes.io/projected/17e6b715-d8fb-496e-a457-0730e02646f3-kube-api-access-jlt92\") pod \"17e6b715-d8fb-496e-a457-0730e02646f3\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.064820 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-config\") pod \"17e6b715-d8fb-496e-a457-0730e02646f3\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.064840 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-combined-ca-bundle\") pod \"17e6b715-d8fb-496e-a457-0730e02646f3\" (UID: \"17e6b715-d8fb-496e-a457-0730e02646f3\") " Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.068911 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "17e6b715-d8fb-496e-a457-0730e02646f3" (UID: "17e6b715-d8fb-496e-a457-0730e02646f3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.069073 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e6b715-d8fb-496e-a457-0730e02646f3-kube-api-access-jlt92" (OuterVolumeSpecName: "kube-api-access-jlt92") pod "17e6b715-d8fb-496e-a457-0730e02646f3" (UID: "17e6b715-d8fb-496e-a457-0730e02646f3"). InnerVolumeSpecName "kube-api-access-jlt92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.114673 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-config" (OuterVolumeSpecName: "config") pod "17e6b715-d8fb-496e-a457-0730e02646f3" (UID: "17e6b715-d8fb-496e-a457-0730e02646f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.125098 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17e6b715-d8fb-496e-a457-0730e02646f3" (UID: "17e6b715-d8fb-496e-a457-0730e02646f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.148428 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "17e6b715-d8fb-496e-a457-0730e02646f3" (UID: "17e6b715-d8fb-496e-a457-0730e02646f3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.167181 4813 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.167251 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.167267 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlt92\" (UniqueName: \"kubernetes.io/projected/17e6b715-d8fb-496e-a457-0730e02646f3-kube-api-access-jlt92\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.167282 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.167294 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e6b715-d8fb-496e-a457-0730e02646f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.936711 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-798d457854-wxskz" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.937235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerStarted","Data":"35a7d7d496b67bb3833f600235743f8ec08cb4ca6eeb7f13ac9dad528893739f"} Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.936786 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="ceilometer-central-agent" containerID="cri-o://c8aab5dc2fc046f37559c21a5e1172290c5f1bdca6365a2ee7410cd3cb9b0741" gracePeriod=30 Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.937355 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="proxy-httpd" containerID="cri-o://35a7d7d496b67bb3833f600235743f8ec08cb4ca6eeb7f13ac9dad528893739f" gracePeriod=30 Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.937431 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="ceilometer-notification-agent" containerID="cri-o://ed9e40f63fa3469fca937376a1fb3465ef607517dcaf33084a9cbfe77b8a9d73" gracePeriod=30 Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.937462 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="sg-core" containerID="cri-o://4b40023f71bd1969a82c8e520bd6c825c829ad3ed2c220c67aa98db08663982d" gracePeriod=30 Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.937598 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.961670 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.74796952 podStartE2EDuration="15.961651583s" podCreationTimestamp="2026-01-29 16:57:32 +0000 UTC" firstStartedPulling="2026-01-29 16:57:33.736626651 +0000 UTC m=+1706.223829867" lastFinishedPulling="2026-01-29 16:57:46.950308714 +0000 UTC m=+1719.437511930" observedRunningTime="2026-01-29 16:57:47.961599071 +0000 UTC m=+1720.448802277" watchObservedRunningTime="2026-01-29 16:57:47.961651583 +0000 UTC m=+1720.448854799" Jan 29 16:57:47 crc kubenswrapper[4813]: I0129 16:57:47.991686 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-798d457854-wxskz"] Jan 29 16:57:48 crc kubenswrapper[4813]: I0129 16:57:48.000090 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-798d457854-wxskz"] Jan 29 16:57:48 crc kubenswrapper[4813]: I0129 16:57:48.252274 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" path="/var/lib/kubelet/pods/17e6b715-d8fb-496e-a457-0730e02646f3/volumes" Jan 29 16:57:48 crc kubenswrapper[4813]: I0129 16:57:48.948217 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa03a455-bc61-4a23-a690-df87aaef9715" containerID="35a7d7d496b67bb3833f600235743f8ec08cb4ca6eeb7f13ac9dad528893739f" exitCode=0 Jan 29 16:57:48 crc kubenswrapper[4813]: I0129 16:57:48.948513 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa03a455-bc61-4a23-a690-df87aaef9715" 
containerID="4b40023f71bd1969a82c8e520bd6c825c829ad3ed2c220c67aa98db08663982d" exitCode=2 Jan 29 16:57:48 crc kubenswrapper[4813]: I0129 16:57:48.948521 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa03a455-bc61-4a23-a690-df87aaef9715" containerID="c8aab5dc2fc046f37559c21a5e1172290c5f1bdca6365a2ee7410cd3cb9b0741" exitCode=0 Jan 29 16:57:48 crc kubenswrapper[4813]: I0129 16:57:48.948289 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerDied","Data":"35a7d7d496b67bb3833f600235743f8ec08cb4ca6eeb7f13ac9dad528893739f"} Jan 29 16:57:48 crc kubenswrapper[4813]: I0129 16:57:48.948550 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerDied","Data":"4b40023f71bd1969a82c8e520bd6c825c829ad3ed2c220c67aa98db08663982d"} Jan 29 16:57:48 crc kubenswrapper[4813]: I0129 16:57:48.948570 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerDied","Data":"c8aab5dc2fc046f37559c21a5e1172290c5f1bdca6365a2ee7410cd3cb9b0741"} Jan 29 16:57:50 crc kubenswrapper[4813]: I0129 16:57:50.967216 4813 generic.go:334] "Generic (PLEG): container finished" podID="c893ab38-542a-4b23-b2c7-cc6a1a8281a5" containerID="2a119d6cd3391339c60a7d61a292d8fdc6f031b8d75829ab322a43fe2c01d8cf" exitCode=0 Jan 29 16:57:50 crc kubenswrapper[4813]: I0129 16:57:50.967294 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" event={"ID":"c893ab38-542a-4b23-b2c7-cc6a1a8281a5","Type":"ContainerDied","Data":"2a119d6cd3391339c60a7d61a292d8fdc6f031b8d75829ab322a43fe2c01d8cf"} Jan 29 16:57:51 crc kubenswrapper[4813]: I0129 16:57:51.239753 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:57:51 crc kubenswrapper[4813]: E0129 16:57:51.240055 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:57:51 crc kubenswrapper[4813]: I0129 16:57:51.989198 4813 generic.go:334] "Generic (PLEG): container finished" podID="fa03a455-bc61-4a23-a690-df87aaef9715" containerID="ed9e40f63fa3469fca937376a1fb3465ef607517dcaf33084a9cbfe77b8a9d73" exitCode=0 Jan 29 16:57:51 crc kubenswrapper[4813]: I0129 16:57:51.989258 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerDied","Data":"ed9e40f63fa3469fca937376a1fb3465ef607517dcaf33084a9cbfe77b8a9d73"} Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.129175 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.258077 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8rxh\" (UniqueName: \"kubernetes.io/projected/fa03a455-bc61-4a23-a690-df87aaef9715-kube-api-access-w8rxh\") pod \"fa03a455-bc61-4a23-a690-df87aaef9715\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.258205 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-sg-core-conf-yaml\") pod \"fa03a455-bc61-4a23-a690-df87aaef9715\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.258308 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-scripts\") pod \"fa03a455-bc61-4a23-a690-df87aaef9715\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.258355 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-config-data\") pod \"fa03a455-bc61-4a23-a690-df87aaef9715\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.258380 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-run-httpd\") pod \"fa03a455-bc61-4a23-a690-df87aaef9715\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.258414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-log-httpd\") pod \"fa03a455-bc61-4a23-a690-df87aaef9715\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.258480 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-combined-ca-bundle\") pod \"fa03a455-bc61-4a23-a690-df87aaef9715\" (UID: \"fa03a455-bc61-4a23-a690-df87aaef9715\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.259219 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fa03a455-bc61-4a23-a690-df87aaef9715" (UID: "fa03a455-bc61-4a23-a690-df87aaef9715"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.259347 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fa03a455-bc61-4a23-a690-df87aaef9715" (UID: "fa03a455-bc61-4a23-a690-df87aaef9715"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.266357 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-scripts" (OuterVolumeSpecName: "scripts") pod "fa03a455-bc61-4a23-a690-df87aaef9715" (UID: "fa03a455-bc61-4a23-a690-df87aaef9715"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.274946 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa03a455-bc61-4a23-a690-df87aaef9715-kube-api-access-w8rxh" (OuterVolumeSpecName: "kube-api-access-w8rxh") pod "fa03a455-bc61-4a23-a690-df87aaef9715" (UID: "fa03a455-bc61-4a23-a690-df87aaef9715"). InnerVolumeSpecName "kube-api-access-w8rxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.285999 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fa03a455-bc61-4a23-a690-df87aaef9715" (UID: "fa03a455-bc61-4a23-a690-df87aaef9715"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.296384 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.352903 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa03a455-bc61-4a23-a690-df87aaef9715" (UID: "fa03a455-bc61-4a23-a690-df87aaef9715"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.360607 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.360646 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.360666 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8rxh\" (UniqueName: \"kubernetes.io/projected/fa03a455-bc61-4a23-a690-df87aaef9715-kube-api-access-w8rxh\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.360677 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.360687 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.360696 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa03a455-bc61-4a23-a690-df87aaef9715-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.362309 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-config-data" (OuterVolumeSpecName: "config-data") pod "fa03a455-bc61-4a23-a690-df87aaef9715" (UID: "fa03a455-bc61-4a23-a690-df87aaef9715"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.461629 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-scripts\") pod \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.461715 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-combined-ca-bundle\") pod \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.461837 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-config-data\") pod \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.461858 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m5hw\" (UniqueName: \"kubernetes.io/projected/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-kube-api-access-5m5hw\") pod \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\" (UID: \"c893ab38-542a-4b23-b2c7-cc6a1a8281a5\") " Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.462209 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa03a455-bc61-4a23-a690-df87aaef9715-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.465099 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-scripts" (OuterVolumeSpecName: "scripts") pod "c893ab38-542a-4b23-b2c7-cc6a1a8281a5" (UID: "c893ab38-542a-4b23-b2c7-cc6a1a8281a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.465490 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-kube-api-access-5m5hw" (OuterVolumeSpecName: "kube-api-access-5m5hw") pod "c893ab38-542a-4b23-b2c7-cc6a1a8281a5" (UID: "c893ab38-542a-4b23-b2c7-cc6a1a8281a5"). InnerVolumeSpecName "kube-api-access-5m5hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.484965 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-config-data" (OuterVolumeSpecName: "config-data") pod "c893ab38-542a-4b23-b2c7-cc6a1a8281a5" (UID: "c893ab38-542a-4b23-b2c7-cc6a1a8281a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.488734 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c893ab38-542a-4b23-b2c7-cc6a1a8281a5" (UID: "c893ab38-542a-4b23-b2c7-cc6a1a8281a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.563522 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.563566 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.563576 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:52 crc kubenswrapper[4813]: I0129 16:57:52.563586 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m5hw\" (UniqueName: \"kubernetes.io/projected/c893ab38-542a-4b23-b2c7-cc6a1a8281a5-kube-api-access-5m5hw\") on node \"crc\" DevicePath \"\"" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.002240 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa03a455-bc61-4a23-a690-df87aaef9715","Type":"ContainerDied","Data":"b398f737a7d02e7fd7ca093af4792e04aee110ccadc0412f7e4f7239ea9d53ab"} Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.002586 4813 scope.go:117] "RemoveContainer" containerID="35a7d7d496b67bb3833f600235743f8ec08cb4ca6eeb7f13ac9dad528893739f" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.002490 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.006176 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" event={"ID":"c893ab38-542a-4b23-b2c7-cc6a1a8281a5","Type":"ContainerDied","Data":"4e9b41b1cf63b88b197b2f80487c2ff65f34e839c7a62ba863f09168a7d9356d"} Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.006240 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e9b41b1cf63b88b197b2f80487c2ff65f34e839c7a62ba863f09168a7d9356d" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.006257 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7g6dr" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.033007 4813 scope.go:117] "RemoveContainer" containerID="4b40023f71bd1969a82c8e520bd6c825c829ad3ed2c220c67aa98db08663982d" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.055651 4813 scope.go:117] "RemoveContainer" containerID="ed9e40f63fa3469fca937376a1fb3465ef607517dcaf33084a9cbfe77b8a9d73" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.056456 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.072908 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.079386 4813 scope.go:117] "RemoveContainer" containerID="c8aab5dc2fc046f37559c21a5e1172290c5f1bdca6365a2ee7410cd3cb9b0741" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.086913 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:53 crc kubenswrapper[4813]: E0129 16:57:53.087359 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" containerName="neutron-api" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.087377 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" containerName="neutron-api" Jan 29 16:57:53 crc kubenswrapper[4813]: E0129 16:57:53.087391 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="ceilometer-central-agent" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.087397 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="ceilometer-central-agent" Jan 29 16:57:53 crc kubenswrapper[4813]: E0129 16:57:53.087415 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="sg-core" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.087421 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="sg-core" Jan 29 16:57:53 crc kubenswrapper[4813]: E0129 16:57:53.087436 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="ceilometer-notification-agent" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.087442 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="ceilometer-notification-agent" Jan 29 16:57:53 crc kubenswrapper[4813]: E0129 16:57:53.087452 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="proxy-httpd" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.087459 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="proxy-httpd" Jan 29 16:57:53 crc kubenswrapper[4813]: E0129 16:57:53.087470 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c893ab38-542a-4b23-b2c7-cc6a1a8281a5" containerName="nova-cell0-conductor-db-sync" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.087476 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c893ab38-542a-4b23-b2c7-cc6a1a8281a5" containerName="nova-cell0-conductor-db-sync" Jan 29 16:57:53 crc kubenswrapper[4813]: E0129 16:57:53.087487 4813 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" containerName="neutron-httpd" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.087492 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" containerName="neutron-httpd" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.088207 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="ceilometer-central-agent" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.088248 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="proxy-httpd" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.088263 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="ceilometer-notification-agent" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.088277 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c893ab38-542a-4b23-b2c7-cc6a1a8281a5" containerName="nova-cell0-conductor-db-sync" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.088298 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" containerName="neutron-api" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.088306 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e6b715-d8fb-496e-a457-0730e02646f3" containerName="neutron-httpd" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.088317 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" containerName="sg-core" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.089882 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.092646 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.101341 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.103507 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.134776 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.136173 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.142757 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.143000 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nq2t4" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.164245 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.276474 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-run-httpd\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.276526 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swzb7\" (UniqueName: \"kubernetes.io/projected/51bbe82b-cffe-4d8b-ac7d-55507916528e-kube-api-access-swzb7\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.276672 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-scripts\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.276716 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.276848 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-log-httpd\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.276879 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.276914 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ldcp\" (UniqueName: \"kubernetes.io/projected/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-kube-api-access-6ldcp\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.277019 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.277072 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-config-data\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.277355 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379239 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-log-httpd\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379306 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379340 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ldcp\" (UniqueName: \"kubernetes.io/projected/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-kube-api-access-6ldcp\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379410 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379456 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-config-data\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379497 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379588 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-run-httpd\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379616 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swzb7\" (UniqueName: 
\"kubernetes.io/projected/51bbe82b-cffe-4d8b-ac7d-55507916528e-kube-api-access-swzb7\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379642 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-scripts\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.379664 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.380836 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-log-httpd\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.380960 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-run-httpd\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.385167 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-config-data\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.385342 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.385407 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-scripts\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.386224 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.390963 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.392774 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.403591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swzb7\" (UniqueName: \"kubernetes.io/projected/51bbe82b-cffe-4d8b-ac7d-55507916528e-kube-api-access-swzb7\") pod \"nova-cell0-conductor-0\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.414070 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ldcp\" (UniqueName: \"kubernetes.io/projected/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-kube-api-access-6ldcp\") pod \"ceilometer-0\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.419891 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.454142 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:53 crc kubenswrapper[4813]: I0129 16:57:53.966420 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:57:54 crc kubenswrapper[4813]: I0129 16:57:54.024487 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abf59a63-bdd8-40d8-a5e0-69e26b57fcba","Type":"ContainerStarted","Data":"4525d2a03c1f2cb573a66fafe3d3580f198c72c49b592d16c58ed015a3a634f0"} Jan 29 16:57:54 crc kubenswrapper[4813]: I0129 16:57:54.038979 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:57:54 crc kubenswrapper[4813]: I0129 16:57:54.252250 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa03a455-bc61-4a23-a690-df87aaef9715" path="/var/lib/kubelet/pods/fa03a455-bc61-4a23-a690-df87aaef9715/volumes" Jan 29 16:57:55 crc kubenswrapper[4813]: I0129 16:57:55.040686 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"51bbe82b-cffe-4d8b-ac7d-55507916528e","Type":"ContainerStarted","Data":"38b2d85eb15f915f427130dc15b7810017d065f72bae0ee347154971510344f8"} Jan 29 16:57:55 crc kubenswrapper[4813]: I0129 16:57:55.040970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"51bbe82b-cffe-4d8b-ac7d-55507916528e","Type":"ContainerStarted","Data":"992e49d9cf399d77f2b8f65e990c0a0a20edd2b805f7467e27d13c8c46164946"} Jan 29 16:57:55 crc kubenswrapper[4813]: I0129 16:57:55.040993 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 16:57:55 crc kubenswrapper[4813]: I0129 16:57:55.042807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abf59a63-bdd8-40d8-a5e0-69e26b57fcba","Type":"ContainerStarted","Data":"490ac85d81cfea520891331a54733150eb728d892ebcb776a1523b259e55b64a"} Jan 29 16:57:55 crc kubenswrapper[4813]: I0129 16:57:55.065866 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.065842995 podStartE2EDuration="2.065842995s" podCreationTimestamp="2026-01-29 16:57:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:57:55.057808755 +0000 UTC m=+1727.545011971" watchObservedRunningTime="2026-01-29 16:57:55.065842995 +0000 UTC m=+1727.553046211" Jan 29 16:57:56 crc kubenswrapper[4813]: I0129 16:57:56.079497 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abf59a63-bdd8-40d8-a5e0-69e26b57fcba","Type":"ContainerStarted","Data":"50016b2252af57f08f2ad8a67b5fece184b8025f38be1fa6b56c75a51813814b"} Jan 29 16:57:56 crc kubenswrapper[4813]: E0129 16:57:56.376313 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:57:56 crc kubenswrapper[4813]: E0129 16:57:56.376697 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ldcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(abf59a63-bdd8-40d8-a5e0-69e26b57fcba): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:57:56 crc kubenswrapper[4813]: E0129 16:57:56.378125 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" Jan 29 16:57:57 crc kubenswrapper[4813]: I0129 16:57:57.090970 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abf59a63-bdd8-40d8-a5e0-69e26b57fcba","Type":"ContainerStarted","Data":"d52e5f1aaf5199dbe36f48502f5e4a65ec34d06d868d7fc7bff13b62d47b5dc1"} Jan 29 16:57:57 crc kubenswrapper[4813]: E0129 16:57:57.092701 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" Jan 29 16:57:58 crc kubenswrapper[4813]: E0129 16:57:58.101371 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" Jan 29 16:58:03 crc kubenswrapper[4813]: I0129 16:58:03.482295 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 16:58:03 crc kubenswrapper[4813]: I0129 16:58:03.968156 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-z9f8j"] Jan 29 16:58:03 crc kubenswrapper[4813]: I0129 16:58:03.970405 4813 util.go:30] "No sandbox for pod can be found. 
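[editor's note] The ErrImagePull record above turns into ImagePullBackOff on the next sync. A rough sketch of the retry cadence behind that transition, under the assumption that kubelet backs off exponentially from a 10s base to a 5m cap (the 5m cap is visible later in this log as "back-off 5m0s" on the CrashLoopBackOff records); the exact parameters are an assumption, not read from this log.

    # Python; assumed base/cap, not kubelet source.
    def backoff_delays(base=10, cap=300, attempts=6):
        delay = base
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_delays()))  # [10, 20, 40, 80, 160, 300]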
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:03 crc kubenswrapper[4813]: I0129 16:58:03.972130 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 16:58:03 crc kubenswrapper[4813]: I0129 16:58:03.973164 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 16:58:03 crc kubenswrapper[4813]: I0129 16:58:03.978793 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-z9f8j"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.069196 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlhn2\" (UniqueName: \"kubernetes.io/projected/4d1abced-dba3-4096-b7b5-f9f17fe32d90-kube-api-access-wlhn2\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.069246 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-config-data\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.069308 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.069411 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-scripts\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.150919 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.152568 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.157324 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.159701 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.170630 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-config-data\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.170668 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.170709 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-scripts\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.170811 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.170821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhn2\" (UniqueName: \"kubernetes.io/projected/4d1abced-dba3-4096-b7b5-f9f17fe32d90-kube-api-access-wlhn2\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.173202 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.176950 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.180817 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-scripts\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.188187 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-config-data\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.193223 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.205655 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhn2\" (UniqueName: \"kubernetes.io/projected/4d1abced-dba3-4096-b7b5-f9f17fe32d90-kube-api-access-wlhn2\") pod \"nova-cell0-cell-mapping-z9f8j\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.229048 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.276772 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.276837 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2fl\" (UniqueName: \"kubernetes.io/projected/57b89297-7334-4e26-bebe-cdd483350ace-kube-api-access-zv2fl\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.276922 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.277022 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.277085 4813 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b89297-7334-4e26-bebe-cdd483350ace-logs\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.277152 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-config-data\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.277176 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b46dq\" (UniqueName: \"kubernetes.io/projected/cd20e0a1-0753-4b66-8141-2a90c60e69b4-kube-api-access-b46dq\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.317666 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-rrqsr"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.319589 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.341709 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-rrqsr"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.345908 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.358230 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.359817 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.364773 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378660 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9blb8\" (UniqueName: \"kubernetes.io/projected/659e0577-c7b3-46c5-81f8-d7f4633c4a98-kube-api-access-9blb8\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378706 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378772 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv2fl\" (UniqueName: \"kubernetes.io/projected/57b89297-7334-4e26-bebe-cdd483350ace-kube-api-access-zv2fl\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378807 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378827 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378870 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b89297-7334-4e26-bebe-cdd483350ace-logs\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378916 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378930 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378952 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-config\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378976 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-config-data\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.378997 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b46dq\" (UniqueName: \"kubernetes.io/projected/cd20e0a1-0753-4b66-8141-2a90c60e69b4-kube-api-access-b46dq\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.380133 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b89297-7334-4e26-bebe-cdd483350ace-logs\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.397160 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.398506 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.398988 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.402102 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.427998 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b46dq\" (UniqueName: 
\"kubernetes.io/projected/cd20e0a1-0753-4b66-8141-2a90c60e69b4-kube-api-access-b46dq\") pod \"nova-cell1-novncproxy-0\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.428630 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-config-data\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.429413 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv2fl\" (UniqueName: \"kubernetes.io/projected/57b89297-7334-4e26-bebe-cdd483350ace-kube-api-access-zv2fl\") pod \"nova-metadata-0\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.470814 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483001 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15df58c-df33-4aee-943d-411afa3be3b7-logs\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483054 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-config-data\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483177 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9blb8\" (UniqueName: \"kubernetes.io/projected/659e0577-c7b3-46c5-81f8-d7f4633c4a98-kube-api-access-9blb8\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483379 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hkc\" (UniqueName: \"kubernetes.io/projected/c15df58c-df33-4aee-943d-411afa3be3b7-kube-api-access-j7hkc\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483413 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483499 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483564 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483621 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483643 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.483672 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-config\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.484617 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-config\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.485610 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.487999 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.488198 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.489607 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.506204 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9blb8\" (UniqueName: \"kubernetes.io/projected/659e0577-c7b3-46c5-81f8-d7f4633c4a98-kube-api-access-9blb8\") pod \"dnsmasq-dns-647df7b8c5-rrqsr\" (UID: 
\"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.531549 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.532877 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.534784 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.565840 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.587521 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np69v\" (UniqueName: \"kubernetes.io/projected/a9cfd76c-dc2f-496a-b9d0-307091c5c709-kube-api-access-np69v\") pod \"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.587617 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15df58c-df33-4aee-943d-411afa3be3b7-logs\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.587643 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-config-data\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.587677 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hkc\" (UniqueName: \"kubernetes.io/projected/c15df58c-df33-4aee-943d-411afa3be3b7-kube-api-access-j7hkc\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.587711 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-config-data\") pod \"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.587817 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.587844 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.590256 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15df58c-df33-4aee-943d-411afa3be3b7-logs\") pod \"nova-api-0\" (UID: 
\"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.593904 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.596515 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-config-data\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.616762 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hkc\" (UniqueName: \"kubernetes.io/projected/c15df58c-df33-4aee-943d-411afa3be3b7-kube-api-access-j7hkc\") pod \"nova-api-0\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.619867 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.675130 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.689890 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-config-data\") pod \"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.690424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.690645 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np69v\" (UniqueName: \"kubernetes.io/projected/a9cfd76c-dc2f-496a-b9d0-307091c5c709-kube-api-access-np69v\") pod \"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.694850 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-config-data\") pod \"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.695306 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.709130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np69v\" (UniqueName: \"kubernetes.io/projected/a9cfd76c-dc2f-496a-b9d0-307091c5c709-kube-api-access-np69v\") pod 
\"nova-scheduler-0\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.804338 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.873266 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:04 crc kubenswrapper[4813]: I0129 16:58:04.972450 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-z9f8j"] Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.035939 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cnf4h"] Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.038750 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.045477 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.045573 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.073433 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cnf4h"] Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.098893 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-config-data\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.099277 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9xfn\" (UniqueName: \"kubernetes.io/projected/1cfb8782-48f0-49b0-a2f8-378b60f304c7-kube-api-access-d9xfn\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.099329 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.099376 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-scripts\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.119842 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:05 crc kubenswrapper[4813]: W0129 16:58:05.119876 4813 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd20e0a1_0753_4b66_8141_2a90c60e69b4.slice/crio-461fdaa6f16b3f6646c66819f1894607e3cd845c61f6b06b238ba6f13711b4fa WatchSource:0}: Error finding container 461fdaa6f16b3f6646c66819f1894607e3cd845c61f6b06b238ba6f13711b4fa: Status 404 returned error can't find the container with id 461fdaa6f16b3f6646c66819f1894607e3cd845c61f6b06b238ba6f13711b4fa Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.168162 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd20e0a1-0753-4b66-8141-2a90c60e69b4","Type":"ContainerStarted","Data":"461fdaa6f16b3f6646c66819f1894607e3cd845c61f6b06b238ba6f13711b4fa"} Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.169694 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z9f8j" event={"ID":"4d1abced-dba3-4096-b7b5-f9f17fe32d90","Type":"ContainerStarted","Data":"10478f64da8aed51aeec7dbad5b0ebb5d2e6f9fb744035a80e958e5fc43b742a"} Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.201139 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9xfn\" (UniqueName: \"kubernetes.io/projected/1cfb8782-48f0-49b0-a2f8-378b60f304c7-kube-api-access-d9xfn\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.201215 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.201262 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-scripts\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.201351 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-config-data\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.209451 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.211707 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-config-data\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.214990 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
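[editor's note] The cAdvisor "Status 404" watch-event warning above is one of only a few non-info records in this stretch. A small, hypothetical triage filter: keep only warning/error records (klog severity W/E) from a local copy of this log, assumed to be saved as kubelet.log, so such races and the image-pull errors stand out.

    # Python; matches the klog severity prefix (W0129/E0129 ...) anywhere in the line.
    import re

    with open("kubelet.log") as fh:
        for line in fh:
            if re.search(r'\b[WE]\d{4} \d{2}:\d{2}:\d{2}\.\d+', line):
                print(line.rstrip())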
\"scripts\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-scripts\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.223749 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9xfn\" (UniqueName: \"kubernetes.io/projected/1cfb8782-48f0-49b0-a2f8-378b60f304c7-kube-api-access-d9xfn\") pod \"nova-cell1-conductor-db-sync-cnf4h\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.355411 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.364715 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.367408 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-rrqsr"] Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.564357 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.579345 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:05 crc kubenswrapper[4813]: W0129 16:58:05.585127 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc15df58c_df33_4aee_943d_411afa3be3b7.slice/crio-0e8a69065b8fd5ef6c0c1346b57999dc8c2fc87ed0fcc83835b9765b08c490a1 WatchSource:0}: Error finding container 0e8a69065b8fd5ef6c0c1346b57999dc8c2fc87ed0fcc83835b9765b08c490a1: Status 404 returned error can't find the container with id 0e8a69065b8fd5ef6c0c1346b57999dc8c2fc87ed0fcc83835b9765b08c490a1 Jan 29 16:58:05 crc kubenswrapper[4813]: I0129 16:58:05.889300 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cnf4h"] Jan 29 16:58:06 crc kubenswrapper[4813]: I0129 16:58:06.177761 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" event={"ID":"659e0577-c7b3-46c5-81f8-d7f4633c4a98","Type":"ContainerStarted","Data":"068031a060c5d177f0e4490f656778674eff456e7b284dbb8701aa54dfc08883"} Jan 29 16:58:06 crc kubenswrapper[4813]: I0129 16:58:06.178581 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c15df58c-df33-4aee-943d-411afa3be3b7","Type":"ContainerStarted","Data":"0e8a69065b8fd5ef6c0c1346b57999dc8c2fc87ed0fcc83835b9765b08c490a1"} Jan 29 16:58:06 crc kubenswrapper[4813]: I0129 16:58:06.179466 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57b89297-7334-4e26-bebe-cdd483350ace","Type":"ContainerStarted","Data":"948475c9cf1b87f9f24c66dad9b93a1ddfd7f699059bf677c1521f6650a2ee1f"} Jan 29 16:58:06 crc kubenswrapper[4813]: I0129 16:58:06.180299 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" event={"ID":"1cfb8782-48f0-49b0-a2f8-378b60f304c7","Type":"ContainerStarted","Data":"0cb810a5574b603c7906038421a991f5cc0f3811f013a3cdbe4ccbd81b0a15f4"} Jan 29 16:58:06 crc kubenswrapper[4813]: I0129 16:58:06.181174 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"a9cfd76c-dc2f-496a-b9d0-307091c5c709","Type":"ContainerStarted","Data":"3e7a5cf99e3e164a63b4adc7639d471479f16ab16d0770c1700e1edd72eb59ff"} Jan 29 16:58:06 crc kubenswrapper[4813]: I0129 16:58:06.243132 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:58:06 crc kubenswrapper[4813]: E0129 16:58:06.243353 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:58:07 crc kubenswrapper[4813]: I0129 16:58:07.193510 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" event={"ID":"659e0577-c7b3-46c5-81f8-d7f4633c4a98","Type":"ContainerStarted","Data":"de18c389e58c6fd95b2366b1c582da0c60264c8ca7eea83dcbc2cc20f6476467"} Jan 29 16:58:07 crc kubenswrapper[4813]: I0129 16:58:07.942223 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:07 crc kubenswrapper[4813]: I0129 16:58:07.953048 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:08 crc kubenswrapper[4813]: I0129 16:58:08.204259 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z9f8j" event={"ID":"4d1abced-dba3-4096-b7b5-f9f17fe32d90","Type":"ContainerStarted","Data":"a661203ef1d1c5865964e620ae1da6ba70aeb836bd04db252fdf5da8043001c2"} Jan 29 16:58:09 crc kubenswrapper[4813]: I0129 16:58:09.213650 4813 generic.go:334] "Generic (PLEG): container finished" podID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerID="de18c389e58c6fd95b2366b1c582da0c60264c8ca7eea83dcbc2cc20f6476467" exitCode=0 Jan 29 16:58:09 crc kubenswrapper[4813]: I0129 16:58:09.213845 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" event={"ID":"659e0577-c7b3-46c5-81f8-d7f4633c4a98","Type":"ContainerDied","Data":"de18c389e58c6fd95b2366b1c582da0c60264c8ca7eea83dcbc2cc20f6476467"} Jan 29 16:58:09 crc kubenswrapper[4813]: I0129 16:58:09.217061 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" event={"ID":"1cfb8782-48f0-49b0-a2f8-378b60f304c7","Type":"ContainerStarted","Data":"6d4a5389f11fece56842c71ac840626c40db5ade9fd41e6524977fd93d84f59b"} Jan 29 16:58:09 crc kubenswrapper[4813]: I0129 16:58:09.260442 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" podStartSLOduration=4.260427651 podStartE2EDuration="4.260427651s" podCreationTimestamp="2026-01-29 16:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:09.258618659 +0000 UTC m=+1741.745821875" watchObservedRunningTime="2026-01-29 16:58:09.260427651 +0000 UTC m=+1741.747630867" Jan 29 16:58:09 crc kubenswrapper[4813]: I0129 16:58:09.277148 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-z9f8j" podStartSLOduration=6.277133809 podStartE2EDuration="6.277133809s" podCreationTimestamp="2026-01-29 16:58:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:09.274691279 +0000 UTC m=+1741.761894525" watchObservedRunningTime="2026-01-29 16:58:09.277133809 +0000 UTC m=+1741.764337025" Jan 29 16:58:10 crc kubenswrapper[4813]: I0129 16:58:10.228829 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" event={"ID":"659e0577-c7b3-46c5-81f8-d7f4633c4a98","Type":"ContainerStarted","Data":"ac67e0f81edb92924026ba83d83845ced0a2630b257120c5587e9ae32626cff4"} Jan 29 16:58:10 crc kubenswrapper[4813]: E0129 16:58:10.370029 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:58:10 crc kubenswrapper[4813]: E0129 16:58:10.370271 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ldcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(abf59a63-bdd8-40d8-a5e0-69e26b57fcba): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:10 crc kubenswrapper[4813]: E0129 16:58:10.382446 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" Jan 29 16:58:11 crc kubenswrapper[4813]: I0129 16:58:11.239903 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:11 crc kubenswrapper[4813]: I0129 16:58:11.273862 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" podStartSLOduration=7.273842363 podStartE2EDuration="7.273842363s" podCreationTimestamp="2026-01-29 16:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:11.263132906 +0000 UTC m=+1743.750336132" watchObservedRunningTime="2026-01-29 16:58:11.273842363 +0000 UTC m=+1743.761045579" Jan 29 16:58:17 crc kubenswrapper[4813]: I0129 16:58:17.299196 4813 generic.go:334] "Generic (PLEG): container finished" podID="4d1abced-dba3-4096-b7b5-f9f17fe32d90" containerID="a661203ef1d1c5865964e620ae1da6ba70aeb836bd04db252fdf5da8043001c2" exitCode=0 Jan 29 16:58:17 crc kubenswrapper[4813]: I0129 16:58:17.299298 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z9f8j" event={"ID":"4d1abced-dba3-4096-b7b5-f9f17fe32d90","Type":"ContainerDied","Data":"a661203ef1d1c5865964e620ae1da6ba70aeb836bd04db252fdf5da8043001c2"} Jan 29 16:58:19 crc kubenswrapper[4813]: I0129 16:58:19.678282 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:58:19 crc kubenswrapper[4813]: I0129 16:58:19.745430 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-dbbcf"] Jan 29 16:58:19 crc kubenswrapper[4813]: I0129 16:58:19.745685 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" podUID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerName="dnsmasq-dns" containerID="cri-o://102440e0d37d695a951d8334372eb2aacd50d381aa2bdfa36a20872e2dd460df" gracePeriod=10 Jan 29 16:58:19 crc kubenswrapper[4813]: I0129 16:58:19.939967 4813 util.go:48] "No ready sandbox for pod can be found. 
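[editor's note] A hypothetical reproduction (kubernetes Python client, not kubelet code) of the kind of API DELETE that precedes the "Killing container with a grace period ... gracePeriod=10" record above; the 10-second value mirrors that record, and cluster access via a local kubeconfig is assumed.

    # Python; requires the `kubernetes` package and a working kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    client.CoreV1Api().delete_namespaced_pod(
        "dnsmasq-dns-75dbb546bf-dbbcf", "openstack",
        grace_period_seconds=10,  # kubelet then kills dnsmasq-dns via cri-o with gracePeriod=10
    )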
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.016095 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-combined-ca-bundle\") pod \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.016179 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-scripts\") pod \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.017077 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-config-data\") pod \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.017191 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlhn2\" (UniqueName: \"kubernetes.io/projected/4d1abced-dba3-4096-b7b5-f9f17fe32d90-kube-api-access-wlhn2\") pod \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\" (UID: \"4d1abced-dba3-4096-b7b5-f9f17fe32d90\") " Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.021781 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1abced-dba3-4096-b7b5-f9f17fe32d90-kube-api-access-wlhn2" (OuterVolumeSpecName: "kube-api-access-wlhn2") pod "4d1abced-dba3-4096-b7b5-f9f17fe32d90" (UID: "4d1abced-dba3-4096-b7b5-f9f17fe32d90"). InnerVolumeSpecName "kube-api-access-wlhn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.024026 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-scripts" (OuterVolumeSpecName: "scripts") pod "4d1abced-dba3-4096-b7b5-f9f17fe32d90" (UID: "4d1abced-dba3-4096-b7b5-f9f17fe32d90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.052996 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-config-data" (OuterVolumeSpecName: "config-data") pod "4d1abced-dba3-4096-b7b5-f9f17fe32d90" (UID: "4d1abced-dba3-4096-b7b5-f9f17fe32d90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.060418 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d1abced-dba3-4096-b7b5-f9f17fe32d90" (UID: "4d1abced-dba3-4096-b7b5-f9f17fe32d90"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.119517 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlhn2\" (UniqueName: \"kubernetes.io/projected/4d1abced-dba3-4096-b7b5-f9f17fe32d90-kube-api-access-wlhn2\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.119561 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.119575 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.119587 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d1abced-dba3-4096-b7b5-f9f17fe32d90-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.242176 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:58:20 crc kubenswrapper[4813]: E0129 16:58:20.242435 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.341639 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z9f8j" event={"ID":"4d1abced-dba3-4096-b7b5-f9f17fe32d90","Type":"ContainerDied","Data":"10478f64da8aed51aeec7dbad5b0ebb5d2e6f9fb744035a80e958e5fc43b742a"} Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.341681 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10478f64da8aed51aeec7dbad5b0ebb5d2e6f9fb744035a80e958e5fc43b742a" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.341734 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z9f8j" Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.346053 4813 generic.go:334] "Generic (PLEG): container finished" podID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerID="102440e0d37d695a951d8334372eb2aacd50d381aa2bdfa36a20872e2dd460df" exitCode=0 Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.346103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" event={"ID":"5eca4c37-f9b2-4941-95da-46b351fb6616","Type":"ContainerDied","Data":"102440e0d37d695a951d8334372eb2aacd50d381aa2bdfa36a20872e2dd460df"} Jan 29 16:58:20 crc kubenswrapper[4813]: I0129 16:58:20.569093 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" podUID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: connect: connection refused" Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.128797 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.145782 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.358465 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c15df58c-df33-4aee-943d-411afa3be3b7","Type":"ContainerStarted","Data":"04deb07ff0d546ef52335b94b3b5bc46f8d6b1014f8dd5cd4d89227ca4403b91"} Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.849889 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.959426 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-nb\") pod \"5eca4c37-f9b2-4941-95da-46b351fb6616\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.959483 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-swift-storage-0\") pod \"5eca4c37-f9b2-4941-95da-46b351fb6616\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.959514 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-svc\") pod \"5eca4c37-f9b2-4941-95da-46b351fb6616\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.959670 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9sqn\" (UniqueName: \"kubernetes.io/projected/5eca4c37-f9b2-4941-95da-46b351fb6616-kube-api-access-z9sqn\") pod \"5eca4c37-f9b2-4941-95da-46b351fb6616\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.959751 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-sb\") pod \"5eca4c37-f9b2-4941-95da-46b351fb6616\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " Jan 29 16:58:21 crc 
kubenswrapper[4813]: I0129 16:58:21.959937 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-config\") pod \"5eca4c37-f9b2-4941-95da-46b351fb6616\" (UID: \"5eca4c37-f9b2-4941-95da-46b351fb6616\") " Jan 29 16:58:21 crc kubenswrapper[4813]: I0129 16:58:21.971164 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eca4c37-f9b2-4941-95da-46b351fb6616-kube-api-access-z9sqn" (OuterVolumeSpecName: "kube-api-access-z9sqn") pod "5eca4c37-f9b2-4941-95da-46b351fb6616" (UID: "5eca4c37-f9b2-4941-95da-46b351fb6616"). InnerVolumeSpecName "kube-api-access-z9sqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.018446 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-config" (OuterVolumeSpecName: "config") pod "5eca4c37-f9b2-4941-95da-46b351fb6616" (UID: "5eca4c37-f9b2-4941-95da-46b351fb6616"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.019785 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5eca4c37-f9b2-4941-95da-46b351fb6616" (UID: "5eca4c37-f9b2-4941-95da-46b351fb6616"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.023955 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5eca4c37-f9b2-4941-95da-46b351fb6616" (UID: "5eca4c37-f9b2-4941-95da-46b351fb6616"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.031159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5eca4c37-f9b2-4941-95da-46b351fb6616" (UID: "5eca4c37-f9b2-4941-95da-46b351fb6616"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.031888 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5eca4c37-f9b2-4941-95da-46b351fb6616" (UID: "5eca4c37-f9b2-4941-95da-46b351fb6616"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.063416 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9sqn\" (UniqueName: \"kubernetes.io/projected/5eca4c37-f9b2-4941-95da-46b351fb6616-kube-api-access-z9sqn\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.063451 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.063463 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.063472 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.063480 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.063490 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eca4c37-f9b2-4941-95da-46b351fb6616-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.368472 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" event={"ID":"5eca4c37-f9b2-4941-95da-46b351fb6616","Type":"ContainerDied","Data":"df3cad68c6016399543c3eb072369d1de26adcdf58927b051c7f6f9ed443a35d"} Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.368522 4813 scope.go:117] "RemoveContainer" containerID="102440e0d37d695a951d8334372eb2aacd50d381aa2bdfa36a20872e2dd460df" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.368670 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-dbbcf" Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.401618 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-dbbcf"] Jan 29 16:58:22 crc kubenswrapper[4813]: I0129 16:58:22.409402 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-dbbcf"] Jan 29 16:58:23 crc kubenswrapper[4813]: I0129 16:58:23.692838 4813 scope.go:117] "RemoveContainer" containerID="8bcb5096e1c226b23a5e530360f64d61c566d065e4957348e0f63fe2d9ec9afb" Jan 29 16:58:24 crc kubenswrapper[4813]: I0129 16:58:24.250903 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eca4c37-f9b2-4941-95da-46b351fb6616" path="/var/lib/kubelet/pods/5eca4c37-f9b2-4941-95da-46b351fb6616/volumes" Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.400201 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd20e0a1-0753-4b66-8141-2a90c60e69b4","Type":"ContainerStarted","Data":"8be814978621eedf005866f3694f7a50546b69755d6c9eefedcd8b5807df160d"} Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.400272 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="cd20e0a1-0753-4b66-8141-2a90c60e69b4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8be814978621eedf005866f3694f7a50546b69755d6c9eefedcd8b5807df160d" gracePeriod=30 Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.406805 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c15df58c-df33-4aee-943d-411afa3be3b7","Type":"ContainerStarted","Data":"50c8a2187de8f439620717ad6a2f33258dbc30f2042312dab57dda1313718312"} Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.407019 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" containerName="nova-api-log" containerID="cri-o://04deb07ff0d546ef52335b94b3b5bc46f8d6b1014f8dd5cd4d89227ca4403b91" gracePeriod=30 Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.407338 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" containerName="nova-api-api" containerID="cri-o://50c8a2187de8f439620717ad6a2f33258dbc30f2042312dab57dda1313718312" gracePeriod=30 Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.416140 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57b89297-7334-4e26-bebe-cdd483350ace","Type":"ContainerStarted","Data":"fbb6507e170f782195f5c0fadfb0de8cc79b69206586f9cf9418fe66effe54d2"} Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.416191 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57b89297-7334-4e26-bebe-cdd483350ace","Type":"ContainerStarted","Data":"fc4a63ed93d0814ecae1b325fb4b8003c7224a638feb506855766ff51677cef4"} Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.416308 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="57b89297-7334-4e26-bebe-cdd483350ace" containerName="nova-metadata-log" containerID="cri-o://fc4a63ed93d0814ecae1b325fb4b8003c7224a638feb506855766ff51677cef4" gracePeriod=30 Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.416418 4813 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="57b89297-7334-4e26-bebe-cdd483350ace" containerName="nova-metadata-metadata" containerID="cri-o://fbb6507e170f782195f5c0fadfb0de8cc79b69206586f9cf9418fe66effe54d2" gracePeriod=30 Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.424764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9cfd76c-dc2f-496a-b9d0-307091c5c709","Type":"ContainerStarted","Data":"d8977588dd469783a9710c5ed582143e331469ea14eebba8456b746d34764c5c"} Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.424894 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a9cfd76c-dc2f-496a-b9d0-307091c5c709" containerName="nova-scheduler-scheduler" containerID="cri-o://d8977588dd469783a9710c5ed582143e331469ea14eebba8456b746d34764c5c" gracePeriod=30 Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.459593 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.29920953 podStartE2EDuration="21.459572355s" podCreationTimestamp="2026-01-29 16:58:04 +0000 UTC" firstStartedPulling="2026-01-29 16:58:05.358797072 +0000 UTC m=+1737.846000288" lastFinishedPulling="2026-01-29 16:58:24.519159897 +0000 UTC m=+1757.006363113" observedRunningTime="2026-01-29 16:58:25.458823644 +0000 UTC m=+1757.946026860" watchObservedRunningTime="2026-01-29 16:58:25.459572355 +0000 UTC m=+1757.946775591" Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.467470 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.7512414339999998 podStartE2EDuration="21.467450411s" podCreationTimestamp="2026-01-29 16:58:04 +0000 UTC" firstStartedPulling="2026-01-29 16:58:05.123607838 +0000 UTC m=+1737.610811054" lastFinishedPulling="2026-01-29 16:58:23.839816815 +0000 UTC m=+1756.327020031" observedRunningTime="2026-01-29 16:58:25.424413958 +0000 UTC m=+1757.911617174" watchObservedRunningTime="2026-01-29 16:58:25.467450411 +0000 UTC m=+1757.954653627" Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.478921 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=6.854728892 podStartE2EDuration="21.478903409s" podCreationTimestamp="2026-01-29 16:58:04 +0000 UTC" firstStartedPulling="2026-01-29 16:58:05.594950984 +0000 UTC m=+1738.082154200" lastFinishedPulling="2026-01-29 16:58:20.219125501 +0000 UTC m=+1752.706328717" observedRunningTime="2026-01-29 16:58:25.478830816 +0000 UTC m=+1757.966034032" watchObservedRunningTime="2026-01-29 16:58:25.478903409 +0000 UTC m=+1757.966106615" Jan 29 16:58:25 crc kubenswrapper[4813]: I0129 16:58:25.497901 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.878503967 podStartE2EDuration="21.497882302s" podCreationTimestamp="2026-01-29 16:58:04 +0000 UTC" firstStartedPulling="2026-01-29 16:58:05.56721914 +0000 UTC m=+1738.054422366" lastFinishedPulling="2026-01-29 16:58:24.186597475 +0000 UTC m=+1756.673800701" observedRunningTime="2026-01-29 16:58:25.494908817 +0000 UTC m=+1757.982112033" watchObservedRunningTime="2026-01-29 16:58:25.497882302 +0000 UTC m=+1757.985085518" Jan 29 16:58:26 crc kubenswrapper[4813]: E0129 16:58:26.241400 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.438940 4813 generic.go:334] "Generic (PLEG): container finished" podID="57b89297-7334-4e26-bebe-cdd483350ace" containerID="fbb6507e170f782195f5c0fadfb0de8cc79b69206586f9cf9418fe66effe54d2" exitCode=0 Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.438977 4813 generic.go:334] "Generic (PLEG): container finished" podID="57b89297-7334-4e26-bebe-cdd483350ace" containerID="fc4a63ed93d0814ecae1b325fb4b8003c7224a638feb506855766ff51677cef4" exitCode=143 Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.439087 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57b89297-7334-4e26-bebe-cdd483350ace","Type":"ContainerDied","Data":"fbb6507e170f782195f5c0fadfb0de8cc79b69206586f9cf9418fe66effe54d2"} Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.439132 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57b89297-7334-4e26-bebe-cdd483350ace","Type":"ContainerDied","Data":"fc4a63ed93d0814ecae1b325fb4b8003c7224a638feb506855766ff51677cef4"} Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.447313 4813 generic.go:334] "Generic (PLEG): container finished" podID="c15df58c-df33-4aee-943d-411afa3be3b7" containerID="50c8a2187de8f439620717ad6a2f33258dbc30f2042312dab57dda1313718312" exitCode=0 Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.447346 4813 generic.go:334] "Generic (PLEG): container finished" podID="c15df58c-df33-4aee-943d-411afa3be3b7" containerID="04deb07ff0d546ef52335b94b3b5bc46f8d6b1014f8dd5cd4d89227ca4403b91" exitCode=143 Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.447367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c15df58c-df33-4aee-943d-411afa3be3b7","Type":"ContainerDied","Data":"50c8a2187de8f439620717ad6a2f33258dbc30f2042312dab57dda1313718312"} Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.447391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c15df58c-df33-4aee-943d-411afa3be3b7","Type":"ContainerDied","Data":"04deb07ff0d546ef52335b94b3b5bc46f8d6b1014f8dd5cd4d89227ca4403b91"} Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.698073 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.796522 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-config-data\") pod \"57b89297-7334-4e26-bebe-cdd483350ace\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.796571 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b89297-7334-4e26-bebe-cdd483350ace-logs\") pod \"57b89297-7334-4e26-bebe-cdd483350ace\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.796756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-combined-ca-bundle\") pod \"57b89297-7334-4e26-bebe-cdd483350ace\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.796813 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv2fl\" (UniqueName: \"kubernetes.io/projected/57b89297-7334-4e26-bebe-cdd483350ace-kube-api-access-zv2fl\") pod \"57b89297-7334-4e26-bebe-cdd483350ace\" (UID: \"57b89297-7334-4e26-bebe-cdd483350ace\") " Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.798997 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b89297-7334-4e26-bebe-cdd483350ace-logs" (OuterVolumeSpecName: "logs") pod "57b89297-7334-4e26-bebe-cdd483350ace" (UID: "57b89297-7334-4e26-bebe-cdd483350ace"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.802755 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b89297-7334-4e26-bebe-cdd483350ace-kube-api-access-zv2fl" (OuterVolumeSpecName: "kube-api-access-zv2fl") pod "57b89297-7334-4e26-bebe-cdd483350ace" (UID: "57b89297-7334-4e26-bebe-cdd483350ace"). InnerVolumeSpecName "kube-api-access-zv2fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.827280 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-config-data" (OuterVolumeSpecName: "config-data") pod "57b89297-7334-4e26-bebe-cdd483350ace" (UID: "57b89297-7334-4e26-bebe-cdd483350ace"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.832378 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57b89297-7334-4e26-bebe-cdd483350ace" (UID: "57b89297-7334-4e26-bebe-cdd483350ace"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.891576 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.899235 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.899267 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv2fl\" (UniqueName: \"kubernetes.io/projected/57b89297-7334-4e26-bebe-cdd483350ace-kube-api-access-zv2fl\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.899281 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57b89297-7334-4e26-bebe-cdd483350ace-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:26 crc kubenswrapper[4813]: I0129 16:58:26.899295 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57b89297-7334-4e26-bebe-cdd483350ace-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.000136 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7hkc\" (UniqueName: \"kubernetes.io/projected/c15df58c-df33-4aee-943d-411afa3be3b7-kube-api-access-j7hkc\") pod \"c15df58c-df33-4aee-943d-411afa3be3b7\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.000203 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15df58c-df33-4aee-943d-411afa3be3b7-logs\") pod \"c15df58c-df33-4aee-943d-411afa3be3b7\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.000317 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-config-data\") pod \"c15df58c-df33-4aee-943d-411afa3be3b7\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.000417 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-combined-ca-bundle\") pod \"c15df58c-df33-4aee-943d-411afa3be3b7\" (UID: \"c15df58c-df33-4aee-943d-411afa3be3b7\") " Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.000614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15df58c-df33-4aee-943d-411afa3be3b7-logs" (OuterVolumeSpecName: "logs") pod "c15df58c-df33-4aee-943d-411afa3be3b7" (UID: "c15df58c-df33-4aee-943d-411afa3be3b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.001394 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c15df58c-df33-4aee-943d-411afa3be3b7-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.008469 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15df58c-df33-4aee-943d-411afa3be3b7-kube-api-access-j7hkc" (OuterVolumeSpecName: "kube-api-access-j7hkc") pod "c15df58c-df33-4aee-943d-411afa3be3b7" (UID: "c15df58c-df33-4aee-943d-411afa3be3b7"). 
InnerVolumeSpecName "kube-api-access-j7hkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.024875 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-config-data" (OuterVolumeSpecName: "config-data") pod "c15df58c-df33-4aee-943d-411afa3be3b7" (UID: "c15df58c-df33-4aee-943d-411afa3be3b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.035660 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c15df58c-df33-4aee-943d-411afa3be3b7" (UID: "c15df58c-df33-4aee-943d-411afa3be3b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.103230 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.103263 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7hkc\" (UniqueName: \"kubernetes.io/projected/c15df58c-df33-4aee-943d-411afa3be3b7-kube-api-access-j7hkc\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.103275 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15df58c-df33-4aee-943d-411afa3be3b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.457766 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.457764 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c15df58c-df33-4aee-943d-411afa3be3b7","Type":"ContainerDied","Data":"0e8a69065b8fd5ef6c0c1346b57999dc8c2fc87ed0fcc83835b9765b08c490a1"} Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.458268 4813 scope.go:117] "RemoveContainer" containerID="50c8a2187de8f439620717ad6a2f33258dbc30f2042312dab57dda1313718312" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.464020 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"57b89297-7334-4e26-bebe-cdd483350ace","Type":"ContainerDied","Data":"948475c9cf1b87f9f24c66dad9b93a1ddfd7f699059bf677c1521f6650a2ee1f"} Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.464164 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.487948 4813 scope.go:117] "RemoveContainer" containerID="04deb07ff0d546ef52335b94b3b5bc46f8d6b1014f8dd5cd4d89227ca4403b91" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.520452 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.530516 4813 scope.go:117] "RemoveContainer" containerID="fbb6507e170f782195f5c0fadfb0de8cc79b69206586f9cf9418fe66effe54d2" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.540754 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.556211 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.566874 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.585511 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:27 crc kubenswrapper[4813]: E0129 16:58:27.586019 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" containerName="nova-api-log" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586045 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" containerName="nova-api-log" Jan 29 16:58:27 crc kubenswrapper[4813]: E0129 16:58:27.586061 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" containerName="nova-api-api" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586069 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" containerName="nova-api-api" Jan 29 16:58:27 crc kubenswrapper[4813]: E0129 16:58:27.586081 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerName="init" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586088 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerName="init" Jan 29 16:58:27 crc kubenswrapper[4813]: E0129 16:58:27.586124 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b89297-7334-4e26-bebe-cdd483350ace" containerName="nova-metadata-metadata" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586132 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b89297-7334-4e26-bebe-cdd483350ace" containerName="nova-metadata-metadata" Jan 29 16:58:27 crc kubenswrapper[4813]: E0129 16:58:27.586146 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerName="dnsmasq-dns" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586154 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerName="dnsmasq-dns" Jan 29 16:58:27 crc kubenswrapper[4813]: E0129 16:58:27.586176 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b89297-7334-4e26-bebe-cdd483350ace" containerName="nova-metadata-log" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586184 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b89297-7334-4e26-bebe-cdd483350ace" containerName="nova-metadata-log" Jan 29 16:58:27 crc kubenswrapper[4813]: 
E0129 16:58:27.586199 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1abced-dba3-4096-b7b5-f9f17fe32d90" containerName="nova-manage" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586205 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1abced-dba3-4096-b7b5-f9f17fe32d90" containerName="nova-manage" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586414 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1abced-dba3-4096-b7b5-f9f17fe32d90" containerName="nova-manage" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586438 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b89297-7334-4e26-bebe-cdd483350ace" containerName="nova-metadata-metadata" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586451 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b89297-7334-4e26-bebe-cdd483350ace" containerName="nova-metadata-log" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586463 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" containerName="nova-api-api" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586482 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eca4c37-f9b2-4941-95da-46b351fb6616" containerName="dnsmasq-dns" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.586495 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" containerName="nova-api-log" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.587700 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.589191 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.589734 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.591389 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.592796 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.592986 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.598263 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.614041 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.620135 4813 scope.go:117] "RemoveContainer" containerID="fc4a63ed93d0814ecae1b325fb4b8003c7224a638feb506855766ff51677cef4" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720191 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqghv\" (UniqueName: \"kubernetes.io/projected/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-kube-api-access-qqghv\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720262 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-logs\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720328 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbv85\" (UniqueName: \"kubernetes.io/projected/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-kube-api-access-lbv85\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720387 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-config-data\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720439 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720528 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720582 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-logs\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720708 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.720766 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-config-data\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822412 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqghv\" (UniqueName: \"kubernetes.io/projected/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-kube-api-access-qqghv\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-logs\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822561 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbv85\" (UniqueName: \"kubernetes.io/projected/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-kube-api-access-lbv85\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-config-data\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822654 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822752 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-logs\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822816 4813 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.822866 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-config-data\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.824056 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-logs\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.830523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-config-data\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.831472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-logs\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.831768 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.833960 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.834622 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.835472 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-config-data\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.849685 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqghv\" (UniqueName: \"kubernetes.io/projected/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-kube-api-access-qqghv\") pod \"nova-api-0\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " pod="openstack/nova-api-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.858952 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbv85\" (UniqueName: 
\"kubernetes.io/projected/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-kube-api-access-lbv85\") pod \"nova-metadata-0\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") " pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.918255 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:58:27 crc kubenswrapper[4813]: I0129 16:58:27.932210 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:28 crc kubenswrapper[4813]: I0129 16:58:28.249916 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b89297-7334-4e26-bebe-cdd483350ace" path="/var/lib/kubelet/pods/57b89297-7334-4e26-bebe-cdd483350ace/volumes" Jan 29 16:58:28 crc kubenswrapper[4813]: I0129 16:58:28.250799 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15df58c-df33-4aee-943d-411afa3be3b7" path="/var/lib/kubelet/pods/c15df58c-df33-4aee-943d-411afa3be3b7/volumes" Jan 29 16:58:28 crc kubenswrapper[4813]: W0129 16:58:28.527535 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6bfd8d9_f27b_407b_a8a4_3592178a6b3a.slice/crio-8aeeb3802dcdf3879e58084e1810024eae489e29c1f69979338fb4d2c9b9c9c3 WatchSource:0}: Error finding container 8aeeb3802dcdf3879e58084e1810024eae489e29c1f69979338fb4d2c9b9c9c3: Status 404 returned error can't find the container with id 8aeeb3802dcdf3879e58084e1810024eae489e29c1f69979338fb4d2c9b9c9c3 Jan 29 16:58:28 crc kubenswrapper[4813]: I0129 16:58:28.527921 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:58:28 crc kubenswrapper[4813]: I0129 16:58:28.612032 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:28 crc kubenswrapper[4813]: W0129 16:58:28.631801 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81b248e9_5c4d_41c4_bfd1_7de0cd97fdde.slice/crio-b94aaacd317ebe41be004a97051a849c4523fb68b0837793f058cef76e6bf9df WatchSource:0}: Error finding container b94aaacd317ebe41be004a97051a849c4523fb68b0837793f058cef76e6bf9df: Status 404 returned error can't find the container with id b94aaacd317ebe41be004a97051a849c4523fb68b0837793f058cef76e6bf9df Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.472080 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.491695 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a","Type":"ContainerStarted","Data":"983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363"} Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.491744 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a","Type":"ContainerStarted","Data":"445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895"} Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.491759 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a","Type":"ContainerStarted","Data":"8aeeb3802dcdf3879e58084e1810024eae489e29c1f69979338fb4d2c9b9c9c3"} Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.494762 
4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde","Type":"ContainerStarted","Data":"95ce28c2012430ea46bed2ac91af7c9a64fa871415bfe374e019c00cd43ba864"} Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.494801 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde","Type":"ContainerStarted","Data":"bf9be73c4eff37e39800921bb47b06f30caca88bbcd684d243c274ab2bb1c723"} Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.494812 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde","Type":"ContainerStarted","Data":"b94aaacd317ebe41be004a97051a849c4523fb68b0837793f058cef76e6bf9df"} Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.542986 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.542964608 podStartE2EDuration="2.542964608s" podCreationTimestamp="2026-01-29 16:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:29.513524365 +0000 UTC m=+1762.000727581" watchObservedRunningTime="2026-01-29 16:58:29.542964608 +0000 UTC m=+1762.030167824" Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.544150 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.544144712 podStartE2EDuration="2.544144712s" podCreationTimestamp="2026-01-29 16:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:29.535817974 +0000 UTC m=+1762.023021200" watchObservedRunningTime="2026-01-29 16:58:29.544144712 +0000 UTC m=+1762.031347928" Jan 29 16:58:29 crc kubenswrapper[4813]: I0129 16:58:29.874548 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 16:58:32 crc kubenswrapper[4813]: I0129 16:58:32.918582 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:58:32 crc kubenswrapper[4813]: I0129 16:58:32.918968 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.240577 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:58:35 crc kubenswrapper[4813]: E0129 16:58:35.241513 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.281474 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r5vqz"] Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.285428 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.302724 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5vqz"] Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.365618 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-catalog-content\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.365681 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-utilities\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.365758 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxc4h\" (UniqueName: \"kubernetes.io/projected/56e11034-07d0-4092-9457-6af779578ad2-kube-api-access-mxc4h\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.467965 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-catalog-content\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.468018 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-utilities\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.468046 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxc4h\" (UniqueName: \"kubernetes.io/projected/56e11034-07d0-4092-9457-6af779578ad2-kube-api-access-mxc4h\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.468530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-catalog-content\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.468642 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-utilities\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.492057 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mxc4h\" (UniqueName: \"kubernetes.io/projected/56e11034-07d0-4092-9457-6af779578ad2-kube-api-access-mxc4h\") pod \"certified-operators-r5vqz\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:35 crc kubenswrapper[4813]: I0129 16:58:35.624273 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:58:36 crc kubenswrapper[4813]: I0129 16:58:36.170018 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5vqz"] Jan 29 16:58:36 crc kubenswrapper[4813]: I0129 16:58:36.559699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5vqz" event={"ID":"56e11034-07d0-4092-9457-6af779578ad2","Type":"ContainerStarted","Data":"0b35263001ec08ceb3dff992adecc18faba5a6c6ae1f69e6b54908e70d580a01"} Jan 29 16:58:37 crc kubenswrapper[4813]: I0129 16:58:37.918849 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:58:37 crc kubenswrapper[4813]: I0129 16:58:37.919321 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:58:37 crc kubenswrapper[4813]: I0129 16:58:37.933258 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:58:37 crc kubenswrapper[4813]: I0129 16:58:37.933608 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:58:38 crc kubenswrapper[4813]: I0129 16:58:38.270346 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:58:38 crc kubenswrapper[4813]: E0129 16:58:38.395911 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:58:38 crc kubenswrapper[4813]: E0129 16:58:38.396091 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ldcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(abf59a63-bdd8-40d8-a5e0-69e26b57fcba): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:38 crc kubenswrapper[4813]: E0129 16:58:38.397311 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" Jan 29 16:58:38 crc kubenswrapper[4813]: I0129 16:58:38.581949 4813 generic.go:334] "Generic (PLEG): container finished" 
podID="56e11034-07d0-4092-9457-6af779578ad2" containerID="a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef" exitCode=0 Jan 29 16:58:38 crc kubenswrapper[4813]: I0129 16:58:38.582846 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5vqz" event={"ID":"56e11034-07d0-4092-9457-6af779578ad2","Type":"ContainerDied","Data":"a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef"} Jan 29 16:58:38 crc kubenswrapper[4813]: E0129 16:58:38.705504 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:58:38 crc kubenswrapper[4813]: E0129 16:58:38.705679 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxc4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r5vqz_openshift-marketplace(56e11034-07d0-4092-9457-6af779578ad2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:38 crc kubenswrapper[4813]: E0129 16:58:38.706847 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-r5vqz" podUID="56e11034-07d0-4092-9457-6af779578ad2" Jan 29 16:58:38 crc kubenswrapper[4813]: I0129 16:58:38.933242 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:58:38 crc kubenswrapper[4813]: I0129 16:58:38.933242 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:58:39 crc kubenswrapper[4813]: I0129 16:58:39.016366 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:58:39 crc kubenswrapper[4813]: I0129 16:58:39.016398 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:58:39 crc kubenswrapper[4813]: E0129 16:58:39.594061 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r5vqz" podUID="56e11034-07d0-4092-9457-6af779578ad2" Jan 29 16:58:47 crc kubenswrapper[4813]: I0129 16:58:47.923626 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:58:47 crc kubenswrapper[4813]: I0129 16:58:47.927699 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:58:47 crc kubenswrapper[4813]: I0129 16:58:47.929987 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:58:47 crc kubenswrapper[4813]: I0129 16:58:47.936866 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:58:47 crc kubenswrapper[4813]: I0129 16:58:47.937290 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:58:47 crc kubenswrapper[4813]: I0129 16:58:47.937349 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:58:47 crc kubenswrapper[4813]: I0129 16:58:47.940602 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:58:48 crc kubenswrapper[4813]: I0129 16:58:48.248455 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:58:48 crc kubenswrapper[4813]: E0129 16:58:48.248729 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:58:48 crc kubenswrapper[4813]: I0129 16:58:48.669757 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-api-0" Jan 29 16:58:48 crc kubenswrapper[4813]: I0129 16:58:48.673445 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:58:48 crc kubenswrapper[4813]: I0129 16:58:48.675429 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:58:48 crc kubenswrapper[4813]: I0129 16:58:48.875142 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-nvszj"] Jan 29 16:58:48 crc kubenswrapper[4813]: I0129 16:58:48.876962 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:48 crc kubenswrapper[4813]: I0129 16:58:48.899769 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-nvszj"] Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.012012 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.012080 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-config\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.012161 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.012192 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.012228 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2cb\" (UniqueName: \"kubernetes.io/projected/1ce996df-c4a1-431e-bae6-d16dfe1491f0-kube-api-access-nq2cb\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.012282 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.113584 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.113632 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.113692 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-config\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.113760 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.113786 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.113852 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2cb\" (UniqueName: \"kubernetes.io/projected/1ce996df-c4a1-431e-bae6-d16dfe1491f0-kube-api-access-nq2cb\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.115086 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.115566 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-config\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.115657 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.116246 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-sb\") pod 
\"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.116262 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.173869 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2cb\" (UniqueName: \"kubernetes.io/projected/1ce996df-c4a1-431e-bae6-d16dfe1491f0-kube-api-access-nq2cb\") pod \"dnsmasq-dns-fcd6f8f8f-nvszj\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.202519 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:49 crc kubenswrapper[4813]: W0129 16:58:49.823334 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ce996df_c4a1_431e_bae6_d16dfe1491f0.slice/crio-0a64f9aa98ed07358c1fd932b64c4dd654815ceb0b543ac7ece797e9e25a9fc3 WatchSource:0}: Error finding container 0a64f9aa98ed07358c1fd932b64c4dd654815ceb0b543ac7ece797e9e25a9fc3: Status 404 returned error can't find the container with id 0a64f9aa98ed07358c1fd932b64c4dd654815ceb0b543ac7ece797e9e25a9fc3 Jan 29 16:58:49 crc kubenswrapper[4813]: I0129 16:58:49.836333 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-nvszj"] Jan 29 16:58:50 crc kubenswrapper[4813]: I0129 16:58:50.688771 4813 generic.go:334] "Generic (PLEG): container finished" podID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" containerID="812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f" exitCode=0 Jan 29 16:58:50 crc kubenswrapper[4813]: I0129 16:58:50.688882 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" event={"ID":"1ce996df-c4a1-431e-bae6-d16dfe1491f0","Type":"ContainerDied","Data":"812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f"} Jan 29 16:58:50 crc kubenswrapper[4813]: I0129 16:58:50.689384 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" event={"ID":"1ce996df-c4a1-431e-bae6-d16dfe1491f0","Type":"ContainerStarted","Data":"0a64f9aa98ed07358c1fd932b64c4dd654815ceb0b543ac7ece797e9e25a9fc3"} Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.219331 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.219964 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="ceilometer-central-agent" containerID="cri-o://490ac85d81cfea520891331a54733150eb728d892ebcb776a1523b259e55b64a" gracePeriod=30 Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.220122 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="sg-core" containerID="cri-o://d52e5f1aaf5199dbe36f48502f5e4a65ec34d06d868d7fc7bff13b62d47b5dc1" gracePeriod=30 Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 
16:58:51.220899 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="ceilometer-notification-agent" containerID="cri-o://50016b2252af57f08f2ad8a67b5fece184b8025f38be1fa6b56c75a51813814b" gracePeriod=30 Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.703476 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.704204 4813 generic.go:334] "Generic (PLEG): container finished" podID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerID="d52e5f1aaf5199dbe36f48502f5e4a65ec34d06d868d7fc7bff13b62d47b5dc1" exitCode=2 Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.704244 4813 generic.go:334] "Generic (PLEG): container finished" podID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerID="490ac85d81cfea520891331a54733150eb728d892ebcb776a1523b259e55b64a" exitCode=0 Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.704288 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abf59a63-bdd8-40d8-a5e0-69e26b57fcba","Type":"ContainerDied","Data":"d52e5f1aaf5199dbe36f48502f5e4a65ec34d06d868d7fc7bff13b62d47b5dc1"} Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.704376 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abf59a63-bdd8-40d8-a5e0-69e26b57fcba","Type":"ContainerDied","Data":"490ac85d81cfea520891331a54733150eb728d892ebcb776a1523b259e55b64a"} Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.708351 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-log" containerID="cri-o://bf9be73c4eff37e39800921bb47b06f30caca88bbcd684d243c274ab2bb1c723" gracePeriod=30 Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.709006 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" event={"ID":"1ce996df-c4a1-431e-bae6-d16dfe1491f0","Type":"ContainerStarted","Data":"ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d"} Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.709067 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-api" containerID="cri-o://95ce28c2012430ea46bed2ac91af7c9a64fa871415bfe374e019c00cd43ba864" gracePeriod=30 Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.709165 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:51 crc kubenswrapper[4813]: I0129 16:58:51.740673 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" podStartSLOduration=3.740653515 podStartE2EDuration="3.740653515s" podCreationTimestamp="2026-01-29 16:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:51.734235641 +0000 UTC m=+1784.221438857" watchObservedRunningTime="2026-01-29 16:58:51.740653515 +0000 UTC m=+1784.227856731" Jan 29 16:58:52 crc kubenswrapper[4813]: I0129 16:58:52.728429 4813 generic.go:334] "Generic (PLEG): container finished" podID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerID="bf9be73c4eff37e39800921bb47b06f30caca88bbcd684d243c274ab2bb1c723" 
exitCode=143 Jan 29 16:58:52 crc kubenswrapper[4813]: I0129 16:58:52.728613 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde","Type":"ContainerDied","Data":"bf9be73c4eff37e39800921bb47b06f30caca88bbcd684d243c274ab2bb1c723"} Jan 29 16:58:53 crc kubenswrapper[4813]: E0129 16:58:53.389322 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:58:53 crc kubenswrapper[4813]: E0129 16:58:53.389462 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxc4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r5vqz_openshift-marketplace(56e11034-07d0-4092-9457-6af779578ad2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:58:53 crc kubenswrapper[4813]: E0129 16:58:53.390873 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-r5vqz" podUID="56e11034-07d0-4092-9457-6af779578ad2" Jan 29 16:58:54 crc kubenswrapper[4813]: I0129 16:58:54.755186 4813 generic.go:334] "Generic (PLEG): container finished" podID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerID="50016b2252af57f08f2ad8a67b5fece184b8025f38be1fa6b56c75a51813814b" exitCode=0 Jan 29 16:58:54 crc kubenswrapper[4813]: I0129 16:58:54.755324 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"abf59a63-bdd8-40d8-a5e0-69e26b57fcba","Type":"ContainerDied","Data":"50016b2252af57f08f2ad8a67b5fece184b8025f38be1fa6b56c75a51813814b"} Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.733849 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.767638 4813 generic.go:334] "Generic (PLEG): container finished" podID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerID="95ce28c2012430ea46bed2ac91af7c9a64fa871415bfe374e019c00cd43ba864" exitCode=0 Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.767710 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde","Type":"ContainerDied","Data":"95ce28c2012430ea46bed2ac91af7c9a64fa871415bfe374e019c00cd43ba864"} Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.769577 4813 generic.go:334] "Generic (PLEG): container finished" podID="cd20e0a1-0753-4b66-8141-2a90c60e69b4" containerID="8be814978621eedf005866f3694f7a50546b69755d6c9eefedcd8b5807df160d" exitCode=137 Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.769638 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd20e0a1-0753-4b66-8141-2a90c60e69b4","Type":"ContainerDied","Data":"8be814978621eedf005866f3694f7a50546b69755d6c9eefedcd8b5807df160d"} Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.771539 4813 generic.go:334] "Generic (PLEG): container finished" podID="a9cfd76c-dc2f-496a-b9d0-307091c5c709" containerID="d8977588dd469783a9710c5ed582143e331469ea14eebba8456b746d34764c5c" exitCode=137 Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.771599 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9cfd76c-dc2f-496a-b9d0-307091c5c709","Type":"ContainerDied","Data":"d8977588dd469783a9710c5ed582143e331469ea14eebba8456b746d34764c5c"} Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.775392 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"abf59a63-bdd8-40d8-a5e0-69e26b57fcba","Type":"ContainerDied","Data":"4525d2a03c1f2cb573a66fafe3d3580f198c72c49b592d16c58ed015a3a634f0"} Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.775460 4813 scope.go:117] "RemoveContainer" containerID="d52e5f1aaf5199dbe36f48502f5e4a65ec34d06d868d7fc7bff13b62d47b5dc1" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.775511 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.851180 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-config-data\") pod \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.851243 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-run-httpd\") pod \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.851265 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ldcp\" (UniqueName: \"kubernetes.io/projected/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-kube-api-access-6ldcp\") pod \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.851454 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-scripts\") pod \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.851489 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-combined-ca-bundle\") pod \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.851516 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-log-httpd\") pod \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.851536 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-sg-core-conf-yaml\") pod \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\" (UID: \"abf59a63-bdd8-40d8-a5e0-69e26b57fcba\") " Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.853601 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "abf59a63-bdd8-40d8-a5e0-69e26b57fcba" (UID: "abf59a63-bdd8-40d8-a5e0-69e26b57fcba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.853703 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "abf59a63-bdd8-40d8-a5e0-69e26b57fcba" (UID: "abf59a63-bdd8-40d8-a5e0-69e26b57fcba"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.856954 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-scripts" (OuterVolumeSpecName: "scripts") pod "abf59a63-bdd8-40d8-a5e0-69e26b57fcba" (UID: "abf59a63-bdd8-40d8-a5e0-69e26b57fcba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.868963 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-kube-api-access-6ldcp" (OuterVolumeSpecName: "kube-api-access-6ldcp") pod "abf59a63-bdd8-40d8-a5e0-69e26b57fcba" (UID: "abf59a63-bdd8-40d8-a5e0-69e26b57fcba"). InnerVolumeSpecName "kube-api-access-6ldcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.889839 4813 scope.go:117] "RemoveContainer" containerID="50016b2252af57f08f2ad8a67b5fece184b8025f38be1fa6b56c75a51813814b" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.896191 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "abf59a63-bdd8-40d8-a5e0-69e26b57fcba" (UID: "abf59a63-bdd8-40d8-a5e0-69e26b57fcba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.913479 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abf59a63-bdd8-40d8-a5e0-69e26b57fcba" (UID: "abf59a63-bdd8-40d8-a5e0-69e26b57fcba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.922840 4813 scope.go:117] "RemoveContainer" containerID="490ac85d81cfea520891331a54733150eb728d892ebcb776a1523b259e55b64a" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.923746 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-config-data" (OuterVolumeSpecName: "config-data") pod "abf59a63-bdd8-40d8-a5e0-69e26b57fcba" (UID: "abf59a63-bdd8-40d8-a5e0-69e26b57fcba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.954606 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.955016 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ldcp\" (UniqueName: \"kubernetes.io/projected/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-kube-api-access-6ldcp\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.955091 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.955200 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.955327 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.955435 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:55 crc kubenswrapper[4813]: I0129 16:58:55.955507 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abf59a63-bdd8-40d8-a5e0-69e26b57fcba-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.034644 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.153878 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.158853 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-combined-ca-bundle\") pod \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.159309 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-config-data\") pod \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.159566 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-logs\") pod \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.159659 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqghv\" (UniqueName: \"kubernetes.io/projected/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-kube-api-access-qqghv\") pod \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\" (UID: \"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.161202 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-logs" (OuterVolumeSpecName: "logs") pod "81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" (UID: "81b248e9-5c4d-41c4-bfd1-7de0cd97fdde"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.169791 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-kube-api-access-qqghv" (OuterVolumeSpecName: "kube-api-access-qqghv") pod "81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" (UID: "81b248e9-5c4d-41c4-bfd1-7de0cd97fdde"). InnerVolumeSpecName "kube-api-access-qqghv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.181162 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.207625 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: E0129 16:58:56.221080 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="ceilometer-notification-agent" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.221209 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="ceilometer-notification-agent" Jan 29 16:58:56 crc kubenswrapper[4813]: E0129 16:58:56.221254 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="sg-core" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.221264 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="sg-core" Jan 29 16:58:56 crc kubenswrapper[4813]: E0129 16:58:56.221307 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-log" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.221316 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-log" Jan 29 16:58:56 crc kubenswrapper[4813]: E0129 16:58:56.221338 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="ceilometer-central-agent" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.221346 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="ceilometer-central-agent" Jan 29 16:58:56 crc kubenswrapper[4813]: E0129 16:58:56.221363 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-api" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.221371 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-api" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.225370 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="ceilometer-notification-agent" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.225412 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="ceilometer-central-agent" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.225426 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" containerName="sg-core" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.225485 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-log" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.225515 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" containerName="nova-api-api" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.247123 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-config-data" (OuterVolumeSpecName: "config-data") pod "81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" (UID: "81b248e9-5c4d-41c4-bfd1-7de0cd97fdde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.248461 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" (UID: "81b248e9-5c4d-41c4-bfd1-7de0cd97fdde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.265221 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.265273 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.265292 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.265309 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqghv\" (UniqueName: \"kubernetes.io/projected/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde-kube-api-access-qqghv\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.270601 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.272306 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.274300 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.274569 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.297170 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf59a63-bdd8-40d8-a5e0-69e26b57fcba" path="/var/lib/kubelet/pods/abf59a63-bdd8-40d8-a5e0-69e26b57fcba/volumes" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.474327 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-scripts\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.474397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.474554 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.474616 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-run-httpd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.474671 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-config-data\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.474701 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-log-httpd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.474731 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzwwd\" (UniqueName: \"kubernetes.io/projected/23623af2-d480-4e00-ae8d-51da73cee712-kube-api-access-qzwwd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.560830 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.580149 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-config-data\") pod \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.580231 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b46dq\" (UniqueName: \"kubernetes.io/projected/cd20e0a1-0753-4b66-8141-2a90c60e69b4-kube-api-access-b46dq\") pod \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.581694 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-combined-ca-bundle\") pod \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\" (UID: \"cd20e0a1-0753-4b66-8141-2a90c60e69b4\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.581903 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-run-httpd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.581956 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-config-data\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.581983 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-log-httpd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.582005 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzwwd\" (UniqueName: \"kubernetes.io/projected/23623af2-d480-4e00-ae8d-51da73cee712-kube-api-access-qzwwd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.582094 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-scripts\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.583408 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-run-httpd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.587966 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd20e0a1-0753-4b66-8141-2a90c60e69b4-kube-api-access-b46dq" (OuterVolumeSpecName: "kube-api-access-b46dq") pod "cd20e0a1-0753-4b66-8141-2a90c60e69b4" (UID: 
"cd20e0a1-0753-4b66-8141-2a90c60e69b4"). InnerVolumeSpecName "kube-api-access-b46dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.594701 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-log-httpd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.594958 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.595273 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.595581 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b46dq\" (UniqueName: \"kubernetes.io/projected/cd20e0a1-0753-4b66-8141-2a90c60e69b4-kube-api-access-b46dq\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.602336 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-scripts\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.610466 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-config-data\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.611777 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.614834 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.639375 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-config-data" (OuterVolumeSpecName: "config-data") pod "cd20e0a1-0753-4b66-8141-2a90c60e69b4" (UID: "cd20e0a1-0753-4b66-8141-2a90c60e69b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.640182 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzwwd\" (UniqueName: \"kubernetes.io/projected/23623af2-d480-4e00-ae8d-51da73cee712-kube-api-access-qzwwd\") pod \"ceilometer-0\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") " pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.645986 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd20e0a1-0753-4b66-8141-2a90c60e69b4" (UID: "cd20e0a1-0753-4b66-8141-2a90c60e69b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.655240 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.659781 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.698838 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-combined-ca-bundle\") pod \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.698895 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-config-data\") pod \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.699057 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np69v\" (UniqueName: \"kubernetes.io/projected/a9cfd76c-dc2f-496a-b9d0-307091c5c709-kube-api-access-np69v\") pod \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\" (UID: \"a9cfd76c-dc2f-496a-b9d0-307091c5c709\") " Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.699567 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.699591 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20e0a1-0753-4b66-8141-2a90c60e69b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.704486 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9cfd76c-dc2f-496a-b9d0-307091c5c709-kube-api-access-np69v" (OuterVolumeSpecName: "kube-api-access-np69v") pod "a9cfd76c-dc2f-496a-b9d0-307091c5c709" (UID: "a9cfd76c-dc2f-496a-b9d0-307091c5c709"). InnerVolumeSpecName "kube-api-access-np69v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.725220 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9cfd76c-dc2f-496a-b9d0-307091c5c709" (UID: "a9cfd76c-dc2f-496a-b9d0-307091c5c709"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.725952 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-config-data" (OuterVolumeSpecName: "config-data") pod "a9cfd76c-dc2f-496a-b9d0-307091c5c709" (UID: "a9cfd76c-dc2f-496a-b9d0-307091c5c709"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.793362 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"81b248e9-5c4d-41c4-bfd1-7de0cd97fdde","Type":"ContainerDied","Data":"b94aaacd317ebe41be004a97051a849c4523fb68b0837793f058cef76e6bf9df"} Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.793415 4813 scope.go:117] "RemoveContainer" containerID="95ce28c2012430ea46bed2ac91af7c9a64fa871415bfe374e019c00cd43ba864" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.793568 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.802965 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cd20e0a1-0753-4b66-8141-2a90c60e69b4","Type":"ContainerDied","Data":"461fdaa6f16b3f6646c66819f1894607e3cd845c61f6b06b238ba6f13711b4fa"} Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.802995 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.805406 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.805437 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cfd76c-dc2f-496a-b9d0-307091c5c709-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.805451 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np69v\" (UniqueName: \"kubernetes.io/projected/a9cfd76c-dc2f-496a-b9d0-307091c5c709-kube-api-access-np69v\") on node \"crc\" DevicePath \"\"" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.810175 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9cfd76c-dc2f-496a-b9d0-307091c5c709","Type":"ContainerDied","Data":"3e7a5cf99e3e164a63b4adc7639d471479f16ab16d0770c1700e1edd72eb59ff"} Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.810282 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.842380 4813 scope.go:117] "RemoveContainer" containerID="bf9be73c4eff37e39800921bb47b06f30caca88bbcd684d243c274ab2bb1c723" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.848171 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.864739 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.881355 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: E0129 16:58:56.882174 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd20e0a1-0753-4b66-8141-2a90c60e69b4" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.882198 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd20e0a1-0753-4b66-8141-2a90c60e69b4" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:58:56 crc kubenswrapper[4813]: E0129 16:58:56.882234 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9cfd76c-dc2f-496a-b9d0-307091c5c709" containerName="nova-scheduler-scheduler" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.882243 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9cfd76c-dc2f-496a-b9d0-307091c5c709" containerName="nova-scheduler-scheduler" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.882460 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9cfd76c-dc2f-496a-b9d0-307091c5c709" containerName="nova-scheduler-scheduler" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.882494 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd20e0a1-0753-4b66-8141-2a90c60e69b4" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.890654 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.890747 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.897876 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.899669 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.900048 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.904037 4813 scope.go:117] "RemoveContainer" containerID="8be814978621eedf005866f3694f7a50546b69755d6c9eefedcd8b5807df160d" Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.913757 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.944488 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.961596 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:56 crc kubenswrapper[4813]: I0129 16:58:56.982914 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:56.998651 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.003876 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.007052 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.008289 4813 scope.go:117] "RemoveContainer" containerID="d8977588dd469783a9710c5ed582143e331469ea14eebba8456b746d34764c5c" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.008380 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.008682 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.009420 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.010056 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.010151 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.015655 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.015758 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9561de1f-657c-41f6-b46b-182027605f4a-logs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.016040 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkhg7\" (UniqueName: \"kubernetes.io/projected/9561de1f-657c-41f6-b46b-182027605f4a-kube-api-access-pkhg7\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.016068 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-public-tls-certs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.016265 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.016398 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-config-data\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.027698 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.040501 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.117785 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.117857 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4f9\" (UniqueName: \"kubernetes.io/projected/ade16a11-d1df-4706-9439-d0ef93069a66-kube-api-access-ww4f9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.117899 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-config-data\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.117933 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.117957 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-config-data\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118026 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118052 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9561de1f-657c-41f6-b46b-182027605f4a-logs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118095 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118137 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118157 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118215 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkhg7\" (UniqueName: \"kubernetes.io/projected/9561de1f-657c-41f6-b46b-182027605f4a-kube-api-access-pkhg7\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118243 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-public-tls-certs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118421 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118514 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwxz\" (UniqueName: \"kubernetes.io/projected/86af341f-2786-405a-98a4-bb3d7bc8c155-kube-api-access-2kwxz\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.118706 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9561de1f-657c-41f6-b46b-182027605f4a-logs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.122530 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-config-data\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.122554 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.123523 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.130158 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-public-tls-certs\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.135674 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkhg7\" (UniqueName: 
\"kubernetes.io/projected/9561de1f-657c-41f6-b46b-182027605f4a-kube-api-access-pkhg7\") pod \"nova-api-0\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") " pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.157542 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.220057 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.220136 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwxz\" (UniqueName: \"kubernetes.io/projected/86af341f-2786-405a-98a4-bb3d7bc8c155-kube-api-access-2kwxz\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.220195 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4f9\" (UniqueName: \"kubernetes.io/projected/ade16a11-d1df-4706-9439-d0ef93069a66-kube-api-access-ww4f9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.220226 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-config-data\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.220254 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.220288 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.220309 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.220328 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.223100 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.230213 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.232535 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.235845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.236130 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.236171 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-config-data\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.237145 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.240845 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwxz\" (UniqueName: \"kubernetes.io/projected/86af341f-2786-405a-98a4-bb3d7bc8c155-kube-api-access-2kwxz\") pod \"nova-scheduler-0\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") " pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.241391 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4f9\" (UniqueName: \"kubernetes.io/projected/ade16a11-d1df-4706-9439-d0ef93069a66-kube-api-access-ww4f9\") pod \"nova-cell1-novncproxy-0\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.343427 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.348282 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.689274 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:58:57 crc kubenswrapper[4813]: W0129 16:58:57.694436 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9561de1f_657c_41f6_b46b_182027605f4a.slice/crio-ffb09fc7d07979675072ec3ef32b129de4eb656af04a6ab215a9f7968b9cea45 WatchSource:0}: Error finding container ffb09fc7d07979675072ec3ef32b129de4eb656af04a6ab215a9f7968b9cea45: Status 404 returned error can't find the container with id ffb09fc7d07979675072ec3ef32b129de4eb656af04a6ab215a9f7968b9cea45 Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.822993 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9561de1f-657c-41f6-b46b-182027605f4a","Type":"ContainerStarted","Data":"ffb09fc7d07979675072ec3ef32b129de4eb656af04a6ab215a9f7968b9cea45"} Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.824161 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerStarted","Data":"5966bf15a1d90ccb9ff02b8235777415251e4a5bca5a2c131b755a8dc4041d71"} Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.855874 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:58:57 crc kubenswrapper[4813]: W0129 16:58:57.858004 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade16a11_d1df_4706_9439_d0ef93069a66.slice/crio-655343c962efe84e49a43101f496cc72eedb3000cfdcb75daf6d174c4c5d604f WatchSource:0}: Error finding container 655343c962efe84e49a43101f496cc72eedb3000cfdcb75daf6d174c4c5d604f: Status 404 returned error can't find the container with id 655343c962efe84e49a43101f496cc72eedb3000cfdcb75daf6d174c4c5d604f Jan 29 16:58:57 crc kubenswrapper[4813]: I0129 16:58:57.956857 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:58:58 crc kubenswrapper[4813]: I0129 16:58:58.256655 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81b248e9-5c4d-41c4-bfd1-7de0cd97fdde" path="/var/lib/kubelet/pods/81b248e9-5c4d-41c4-bfd1-7de0cd97fdde/volumes" Jan 29 16:58:58 crc kubenswrapper[4813]: I0129 16:58:58.257593 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9cfd76c-dc2f-496a-b9d0-307091c5c709" path="/var/lib/kubelet/pods/a9cfd76c-dc2f-496a-b9d0-307091c5c709/volumes" Jan 29 16:58:58 crc kubenswrapper[4813]: I0129 16:58:58.258268 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd20e0a1-0753-4b66-8141-2a90c60e69b4" path="/var/lib/kubelet/pods/cd20e0a1-0753-4b66-8141-2a90c60e69b4/volumes" Jan 29 16:58:58 crc kubenswrapper[4813]: I0129 16:58:58.834749 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ade16a11-d1df-4706-9439-d0ef93069a66","Type":"ContainerStarted","Data":"e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae"} Jan 29 16:58:58 crc kubenswrapper[4813]: I0129 16:58:58.834803 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"ade16a11-d1df-4706-9439-d0ef93069a66","Type":"ContainerStarted","Data":"655343c962efe84e49a43101f496cc72eedb3000cfdcb75daf6d174c4c5d604f"} Jan 29 16:58:58 crc kubenswrapper[4813]: I0129 16:58:58.836236 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86af341f-2786-405a-98a4-bb3d7bc8c155","Type":"ContainerStarted","Data":"5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61"} Jan 29 16:58:58 crc kubenswrapper[4813]: I0129 16:58:58.836261 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86af341f-2786-405a-98a4-bb3d7bc8c155","Type":"ContainerStarted","Data":"783b30cd258b28102a8ac51d067ef2cdb45b7cfd5bdcf954d66fc8e74da54a83"} Jan 29 16:58:58 crc kubenswrapper[4813]: I0129 16:58:58.837747 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9561de1f-657c-41f6-b46b-182027605f4a","Type":"ContainerStarted","Data":"816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358"} Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.204269 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.285321 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-rrqsr"] Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.285599 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" podUID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerName="dnsmasq-dns" containerID="cri-o://ac67e0f81edb92924026ba83d83845ced0a2630b257120c5587e9ae32626cff4" gracePeriod=10 Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.849672 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9561de1f-657c-41f6-b46b-182027605f4a","Type":"ContainerStarted","Data":"4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83"} Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.851749 4813 generic.go:334] "Generic (PLEG): container finished" podID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerID="ac67e0f81edb92924026ba83d83845ced0a2630b257120c5587e9ae32626cff4" exitCode=0 Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.851863 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" event={"ID":"659e0577-c7b3-46c5-81f8-d7f4633c4a98","Type":"ContainerDied","Data":"ac67e0f81edb92924026ba83d83845ced0a2630b257120c5587e9ae32626cff4"} Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.868418 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.868400203 podStartE2EDuration="3.868400203s" podCreationTimestamp="2026-01-29 16:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:59.86688101 +0000 UTC m=+1792.354084226" watchObservedRunningTime="2026-01-29 16:58:59.868400203 +0000 UTC m=+1792.355603419" Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.890777 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.890761263 podStartE2EDuration="3.890761263s" podCreationTimestamp="2026-01-29 16:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:59.884069392 +0000 UTC m=+1792.371272608" watchObservedRunningTime="2026-01-29 16:58:59.890761263 +0000 UTC m=+1792.377964479" Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.914053 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.91403084 podStartE2EDuration="3.91403084s" podCreationTimestamp="2026-01-29 16:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:58:59.905686921 +0000 UTC m=+1792.392890137" watchObservedRunningTime="2026-01-29 16:58:59.91403084 +0000 UTC m=+1792.401234046" Jan 29 16:58:59 crc kubenswrapper[4813]: I0129 16:58:59.961080 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.095186 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-sb\") pod \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.095256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-nb\") pod \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.095349 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9blb8\" (UniqueName: \"kubernetes.io/projected/659e0577-c7b3-46c5-81f8-d7f4633c4a98-kube-api-access-9blb8\") pod \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.095432 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-svc\") pod \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.095506 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-swift-storage-0\") pod \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.095616 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-config\") pod \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\" (UID: \"659e0577-c7b3-46c5-81f8-d7f4633c4a98\") " Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.117746 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659e0577-c7b3-46c5-81f8-d7f4633c4a98-kube-api-access-9blb8" (OuterVolumeSpecName: "kube-api-access-9blb8") pod "659e0577-c7b3-46c5-81f8-d7f4633c4a98" (UID: "659e0577-c7b3-46c5-81f8-d7f4633c4a98"). InnerVolumeSpecName "kube-api-access-9blb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.152551 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "659e0577-c7b3-46c5-81f8-d7f4633c4a98" (UID: "659e0577-c7b3-46c5-81f8-d7f4633c4a98"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.158623 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "659e0577-c7b3-46c5-81f8-d7f4633c4a98" (UID: "659e0577-c7b3-46c5-81f8-d7f4633c4a98"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.160346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "659e0577-c7b3-46c5-81f8-d7f4633c4a98" (UID: "659e0577-c7b3-46c5-81f8-d7f4633c4a98"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.167178 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-config" (OuterVolumeSpecName: "config") pod "659e0577-c7b3-46c5-81f8-d7f4633c4a98" (UID: "659e0577-c7b3-46c5-81f8-d7f4633c4a98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.177506 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "659e0577-c7b3-46c5-81f8-d7f4633c4a98" (UID: "659e0577-c7b3-46c5-81f8-d7f4633c4a98"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.198220 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.198256 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.198266 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.198277 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.198291 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9blb8\" (UniqueName: \"kubernetes.io/projected/659e0577-c7b3-46c5-81f8-d7f4633c4a98-kube-api-access-9blb8\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.198303 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/659e0577-c7b3-46c5-81f8-d7f4633c4a98-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.240493 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:59:00 crc kubenswrapper[4813]: E0129 16:59:00.240735 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.869892 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" event={"ID":"659e0577-c7b3-46c5-81f8-d7f4633c4a98","Type":"ContainerDied","Data":"068031a060c5d177f0e4490f656778674eff456e7b284dbb8701aa54dfc08883"} Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.869916 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.869981 4813 scope.go:117] "RemoveContainer" containerID="ac67e0f81edb92924026ba83d83845ced0a2630b257120c5587e9ae32626cff4" Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.895181 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-rrqsr"] Jan 29 16:59:00 crc kubenswrapper[4813]: I0129 16:59:00.904151 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-rrqsr"] Jan 29 16:59:01 crc kubenswrapper[4813]: I0129 16:59:01.323473 4813 scope.go:117] "RemoveContainer" containerID="de18c389e58c6fd95b2366b1c582da0c60264c8ca7eea83dcbc2cc20f6476467" Jan 29 16:59:02 crc kubenswrapper[4813]: I0129 16:59:02.251794 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" path="/var/lib/kubelet/pods/659e0577-c7b3-46c5-81f8-d7f4633c4a98/volumes" Jan 29 16:59:02 crc kubenswrapper[4813]: I0129 16:59:02.344271 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:59:02 crc kubenswrapper[4813]: I0129 16:59:02.349446 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 16:59:02 crc kubenswrapper[4813]: I0129 16:59:02.892703 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerStarted","Data":"5f5a0255b32de3bfbc3839d6a30cc333786d3d305b59b44dae2b1fac6564c366"} Jan 29 16:59:04 crc kubenswrapper[4813]: I0129 16:59:04.676674 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-647df7b8c5-rrqsr" podUID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.192:5353: i/o timeout" Jan 29 16:59:05 crc kubenswrapper[4813]: E0129 16:59:05.241055 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r5vqz" podUID="56e11034-07d0-4092-9457-6af779578ad2" Jan 29 16:59:05 crc kubenswrapper[4813]: I0129 16:59:05.924106 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerStarted","Data":"31326c9192bca4879c9d47c42389f6247ae3d703f51209d9887acf47c1a9a945"} Jan 29 16:59:07 crc kubenswrapper[4813]: I0129 16:59:07.224631 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:59:07 crc kubenswrapper[4813]: I0129 16:59:07.224934 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:59:07 crc kubenswrapper[4813]: I0129 16:59:07.346001 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:59:07 crc kubenswrapper[4813]: I0129 16:59:07.349408 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 16:59:07 crc kubenswrapper[4813]: I0129 16:59:07.378719 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 16:59:07 crc kubenswrapper[4813]: I0129 
16:59:07.389028 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:59:07 crc kubenswrapper[4813]: I0129 16:59:07.961512 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:59:07 crc kubenswrapper[4813]: I0129 16:59:07.977842 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 16:59:08 crc kubenswrapper[4813]: I0129 16:59:08.238349 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:59:08 crc kubenswrapper[4813]: I0129 16:59:08.238368 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:59:10 crc kubenswrapper[4813]: E0129 16:59:10.863867 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:59:10 crc kubenswrapper[4813]: E0129 16:59:10.864726 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzwwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(23623af2-d480-4e00-ae8d-51da73cee712): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:10 crc kubenswrapper[4813]: E0129 16:59:10.866368 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 16:59:10 crc kubenswrapper[4813]: I0129 16:59:10.967972 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerStarted","Data":"dafb4e0a912def79f44d01aafe94a7de52de4565116ea94454d5ca790eaabad2"} Jan 29 16:59:10 crc kubenswrapper[4813]: E0129 16:59:10.970014 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 16:59:11 crc kubenswrapper[4813]: E0129 16:59:11.985885 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 16:59:13 crc kubenswrapper[4813]: I0129 16:59:13.239937 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:59:13 crc kubenswrapper[4813]: E0129 16:59:13.240199 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:59:17 crc kubenswrapper[4813]: I0129 16:59:17.231009 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:59:17 crc kubenswrapper[4813]: I0129 16:59:17.232280 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:59:17 crc kubenswrapper[4813]: I0129 16:59:17.232684 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:59:17 crc kubenswrapper[4813]: I0129 16:59:17.232749 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:59:17 crc kubenswrapper[4813]: I0129 16:59:17.239909 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:59:17 crc kubenswrapper[4813]: I0129 16:59:17.240334 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:59:23 crc kubenswrapper[4813]: I0129 16:59:23.075693 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5vqz" event={"ID":"56e11034-07d0-4092-9457-6af779578ad2","Type":"ContainerStarted","Data":"933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f"} Jan 29 16:59:24 crc kubenswrapper[4813]: I0129 16:59:24.202267 4813 scope.go:117] "RemoveContainer" containerID="e5b3fbb3afcdba2ef3d6466a8a826cd3607db54b2d63062bc8f8f914902de3d8" Jan 29 16:59:24 crc kubenswrapper[4813]: I0129 16:59:24.258213 4813 scope.go:117] "RemoveContainer" containerID="24bea98423653dbe14915ccf830b210d9c2e824eccc8ccf86943d9691cbecf47" Jan 29 16:59:24 crc kubenswrapper[4813]: I0129 16:59:24.320005 4813 scope.go:117] "RemoveContainer" containerID="d1e6d50b020d56696baa547bab3ba30760ed87a983a01fa3686da6a9c211e646" Jan 29 16:59:24 crc kubenswrapper[4813]: I0129 16:59:24.367411 4813 scope.go:117] "RemoveContainer" containerID="ee0460e20b36b46ac840fc93a263e55239f303f9abfa21e1bb30ac7f1e3db8c9" Jan 29 16:59:24 crc kubenswrapper[4813]: I0129 16:59:24.651075 4813 scope.go:117] "RemoveContainer" containerID="657006dc517868a8d5f18265519f539855679f9f9dc88ebe6e1bed7196b535b3" Jan 29 16:59:24 crc kubenswrapper[4813]: I0129 16:59:24.674035 4813 scope.go:117] "RemoveContainer" containerID="6cbf30d9a6f8e6434b88c8267402579aa71062196797b6ff8c2ef6920d946613" Jan 29 16:59:25 crc kubenswrapper[4813]: I0129 16:59:25.240200 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:59:25 crc kubenswrapper[4813]: E0129 16:59:25.240521 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 16:59:26 crc kubenswrapper[4813]: E0129 16:59:26.360662 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: 
Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:59:26 crc kubenswrapper[4813]: E0129 16:59:26.361198 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzwwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(23623af2-d480-4e00-ae8d-51da73cee712): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:26 crc kubenswrapper[4813]: E0129 16:59:26.362410 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 16:59:35 crc kubenswrapper[4813]: I0129 16:59:35.228142 4813 generic.go:334] "Generic (PLEG): container finished" podID="56e11034-07d0-4092-9457-6af779578ad2" containerID="933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f" exitCode=0 Jan 29 16:59:35 crc kubenswrapper[4813]: I0129 16:59:35.228214 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5vqz" event={"ID":"56e11034-07d0-4092-9457-6af779578ad2","Type":"ContainerDied","Data":"933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f"} Jan 29 16:59:36 crc kubenswrapper[4813]: I0129 16:59:36.242992 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 16:59:39 crc kubenswrapper[4813]: I0129 16:59:39.261781 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"c67bf6fa310210448a647da62a3e9c7bcafe61094dc8679ad60bf50059882bd5"} Jan 29 16:59:41 crc kubenswrapper[4813]: E0129 16:59:41.242891 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 16:59:41 crc kubenswrapper[4813]: I0129 16:59:41.284640 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5vqz" event={"ID":"56e11034-07d0-4092-9457-6af779578ad2","Type":"ContainerStarted","Data":"6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2"} Jan 29 16:59:41 crc kubenswrapper[4813]: I0129 16:59:41.316202 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r5vqz" podStartSLOduration=4.868620241 podStartE2EDuration="1m6.316181794s" podCreationTimestamp="2026-01-29 16:58:35 +0000 UTC" firstStartedPulling="2026-01-29 16:58:38.583500573 +0000 UTC m=+1771.070703789" lastFinishedPulling="2026-01-29 16:59:40.031062126 +0000 UTC m=+1832.518265342" observedRunningTime="2026-01-29 16:59:41.308621408 +0000 UTC m=+1833.795824624" watchObservedRunningTime="2026-01-29 16:59:41.316181794 +0000 UTC m=+1833.803385010" Jan 29 16:59:45 crc kubenswrapper[4813]: I0129 16:59:45.625262 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:59:45 crc kubenswrapper[4813]: I0129 16:59:45.625964 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:59:45 crc kubenswrapper[4813]: I0129 16:59:45.677915 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:59:46 crc kubenswrapper[4813]: I0129 16:59:46.501107 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:59:46 crc kubenswrapper[4813]: I0129 
16:59:46.549719 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5vqz"] Jan 29 16:59:48 crc kubenswrapper[4813]: I0129 16:59:48.467836 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r5vqz" podUID="56e11034-07d0-4092-9457-6af779578ad2" containerName="registry-server" containerID="cri-o://6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2" gracePeriod=2 Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.333088 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.468137 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxc4h\" (UniqueName: \"kubernetes.io/projected/56e11034-07d0-4092-9457-6af779578ad2-kube-api-access-mxc4h\") pod \"56e11034-07d0-4092-9457-6af779578ad2\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.468289 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-catalog-content\") pod \"56e11034-07d0-4092-9457-6af779578ad2\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.468336 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-utilities\") pod \"56e11034-07d0-4092-9457-6af779578ad2\" (UID: \"56e11034-07d0-4092-9457-6af779578ad2\") " Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.469401 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-utilities" (OuterVolumeSpecName: "utilities") pod "56e11034-07d0-4092-9457-6af779578ad2" (UID: "56e11034-07d0-4092-9457-6af779578ad2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.477138 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e11034-07d0-4092-9457-6af779578ad2-kube-api-access-mxc4h" (OuterVolumeSpecName: "kube-api-access-mxc4h") pod "56e11034-07d0-4092-9457-6af779578ad2" (UID: "56e11034-07d0-4092-9457-6af779578ad2"). InnerVolumeSpecName "kube-api-access-mxc4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.487305 4813 generic.go:334] "Generic (PLEG): container finished" podID="56e11034-07d0-4092-9457-6af779578ad2" containerID="6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2" exitCode=0 Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.487347 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5vqz" event={"ID":"56e11034-07d0-4092-9457-6af779578ad2","Type":"ContainerDied","Data":"6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2"} Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.487374 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5vqz" event={"ID":"56e11034-07d0-4092-9457-6af779578ad2","Type":"ContainerDied","Data":"0b35263001ec08ceb3dff992adecc18faba5a6c6ae1f69e6b54908e70d580a01"} Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.487390 4813 scope.go:117] "RemoveContainer" containerID="6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.487503 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5vqz" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.521092 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56e11034-07d0-4092-9457-6af779578ad2" (UID: "56e11034-07d0-4092-9457-6af779578ad2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.527297 4813 scope.go:117] "RemoveContainer" containerID="933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.550600 4813 scope.go:117] "RemoveContainer" containerID="a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.570814 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxc4h\" (UniqueName: \"kubernetes.io/projected/56e11034-07d0-4092-9457-6af779578ad2-kube-api-access-mxc4h\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.570867 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.570881 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e11034-07d0-4092-9457-6af779578ad2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.622001 4813 scope.go:117] "RemoveContainer" containerID="6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2" Jan 29 16:59:49 crc kubenswrapper[4813]: E0129 16:59:49.622482 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2\": container with ID starting with 6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2 not found: ID does not exist" 
containerID="6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.622572 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2"} err="failed to get container status \"6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2\": rpc error: code = NotFound desc = could not find container \"6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2\": container with ID starting with 6134537715a992f51bc1d42ffb8300250dc27b379f0bdd3751becba9a5eb9cd2 not found: ID does not exist" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.622600 4813 scope.go:117] "RemoveContainer" containerID="933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f" Jan 29 16:59:49 crc kubenswrapper[4813]: E0129 16:59:49.622880 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f\": container with ID starting with 933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f not found: ID does not exist" containerID="933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.622909 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f"} err="failed to get container status \"933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f\": rpc error: code = NotFound desc = could not find container \"933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f\": container with ID starting with 933bda2049d68d41d1ec33d6c7603c079e646995838789b95dc332dee746f85f not found: ID does not exist" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.622925 4813 scope.go:117] "RemoveContainer" containerID="a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef" Jan 29 16:59:49 crc kubenswrapper[4813]: E0129 16:59:49.623201 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef\": container with ID starting with a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef not found: ID does not exist" containerID="a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.623229 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef"} err="failed to get container status \"a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef\": rpc error: code = NotFound desc = could not find container \"a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef\": container with ID starting with a964482868e11a3fe5078a8a4a71caaf8e3bda965be7dd34375473b03234d2ef not found: ID does not exist" Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.825724 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5vqz"] Jan 29 16:59:49 crc kubenswrapper[4813]: I0129 16:59:49.833313 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r5vqz"] Jan 29 16:59:50 crc kubenswrapper[4813]: I0129 16:59:50.253639 
4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e11034-07d0-4092-9457-6af779578ad2" path="/var/lib/kubelet/pods/56e11034-07d0-4092-9457-6af779578ad2/volumes" Jan 29 16:59:53 crc kubenswrapper[4813]: E0129 16:59:53.386856 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:59:53 crc kubenswrapper[4813]: E0129 16:59:53.387371 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzwwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(23623af2-d480-4e00-ae8d-51da73cee712): ErrImagePull: initializing source 
docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:59:53 crc kubenswrapper[4813]: E0129 16:59:53.388670 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.151442 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv"] Jan 29 17:00:00 crc kubenswrapper[4813]: E0129 17:00:00.152455 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerName="init" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.152473 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerName="init" Jan 29 17:00:00 crc kubenswrapper[4813]: E0129 17:00:00.152502 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e11034-07d0-4092-9457-6af779578ad2" containerName="extract-utilities" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.152511 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e11034-07d0-4092-9457-6af779578ad2" containerName="extract-utilities" Jan 29 17:00:00 crc kubenswrapper[4813]: E0129 17:00:00.152527 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e11034-07d0-4092-9457-6af779578ad2" containerName="registry-server" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.152535 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e11034-07d0-4092-9457-6af779578ad2" containerName="registry-server" Jan 29 17:00:00 crc kubenswrapper[4813]: E0129 17:00:00.152551 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e11034-07d0-4092-9457-6af779578ad2" containerName="extract-content" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.152559 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e11034-07d0-4092-9457-6af779578ad2" containerName="extract-content" Jan 29 17:00:00 crc kubenswrapper[4813]: E0129 17:00:00.152572 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerName="dnsmasq-dns" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.152579 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerName="dnsmasq-dns" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.152794 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="659e0577-c7b3-46c5-81f8-d7f4633c4a98" containerName="dnsmasq-dns" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.152818 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e11034-07d0-4092-9457-6af779578ad2" containerName="registry-server" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.153577 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.160617 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.160866 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.168504 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv"] Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.261271 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d362809d-d855-4090-aadc-97d90d262870-secret-volume\") pod \"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.261459 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jww7\" (UniqueName: \"kubernetes.io/projected/d362809d-d855-4090-aadc-97d90d262870-kube-api-access-8jww7\") pod \"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.261527 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d362809d-d855-4090-aadc-97d90d262870-config-volume\") pod \"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.363805 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d362809d-d855-4090-aadc-97d90d262870-secret-volume\") pod \"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.363956 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jww7\" (UniqueName: \"kubernetes.io/projected/d362809d-d855-4090-aadc-97d90d262870-kube-api-access-8jww7\") pod \"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.364021 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d362809d-d855-4090-aadc-97d90d262870-config-volume\") pod \"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.365211 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d362809d-d855-4090-aadc-97d90d262870-config-volume\") pod 
\"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.371004 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d362809d-d855-4090-aadc-97d90d262870-secret-volume\") pod \"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.380665 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jww7\" (UniqueName: \"kubernetes.io/projected/d362809d-d855-4090-aadc-97d90d262870-kube-api-access-8jww7\") pod \"collect-profiles-29495100-s4wnv\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.482286 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:00 crc kubenswrapper[4813]: I0129 17:00:00.951765 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv"] Jan 29 17:00:01 crc kubenswrapper[4813]: I0129 17:00:01.600214 4813 generic.go:334] "Generic (PLEG): container finished" podID="d362809d-d855-4090-aadc-97d90d262870" containerID="3986427147f192b5da23331ec546144b8e5d8a069d3ebab9179bfef10b32c34e" exitCode=0 Jan 29 17:00:01 crc kubenswrapper[4813]: I0129 17:00:01.600284 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" event={"ID":"d362809d-d855-4090-aadc-97d90d262870","Type":"ContainerDied","Data":"3986427147f192b5da23331ec546144b8e5d8a069d3ebab9179bfef10b32c34e"} Jan 29 17:00:01 crc kubenswrapper[4813]: I0129 17:00:01.600513 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" event={"ID":"d362809d-d855-4090-aadc-97d90d262870","Type":"ContainerStarted","Data":"e2dbdbcaa8c4e68799484f6bbde76c3460435885d04f55f437cf42e736940dfe"} Jan 29 17:00:02 crc kubenswrapper[4813]: I0129 17:00:02.942825 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.117289 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d362809d-d855-4090-aadc-97d90d262870-secret-volume\") pod \"d362809d-d855-4090-aadc-97d90d262870\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.117606 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jww7\" (UniqueName: \"kubernetes.io/projected/d362809d-d855-4090-aadc-97d90d262870-kube-api-access-8jww7\") pod \"d362809d-d855-4090-aadc-97d90d262870\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.117693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d362809d-d855-4090-aadc-97d90d262870-config-volume\") pod \"d362809d-d855-4090-aadc-97d90d262870\" (UID: \"d362809d-d855-4090-aadc-97d90d262870\") " Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.118373 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d362809d-d855-4090-aadc-97d90d262870-config-volume" (OuterVolumeSpecName: "config-volume") pod "d362809d-d855-4090-aadc-97d90d262870" (UID: "d362809d-d855-4090-aadc-97d90d262870"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.134483 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d362809d-d855-4090-aadc-97d90d262870-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d362809d-d855-4090-aadc-97d90d262870" (UID: "d362809d-d855-4090-aadc-97d90d262870"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.134635 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d362809d-d855-4090-aadc-97d90d262870-kube-api-access-8jww7" (OuterVolumeSpecName: "kube-api-access-8jww7") pod "d362809d-d855-4090-aadc-97d90d262870" (UID: "d362809d-d855-4090-aadc-97d90d262870"). InnerVolumeSpecName "kube-api-access-8jww7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.220450 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d362809d-d855-4090-aadc-97d90d262870-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.220495 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jww7\" (UniqueName: \"kubernetes.io/projected/d362809d-d855-4090-aadc-97d90d262870-kube-api-access-8jww7\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.220507 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d362809d-d855-4090-aadc-97d90d262870-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.617280 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" event={"ID":"d362809d-d855-4090-aadc-97d90d262870","Type":"ContainerDied","Data":"e2dbdbcaa8c4e68799484f6bbde76c3460435885d04f55f437cf42e736940dfe"} Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.617323 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2dbdbcaa8c4e68799484f6bbde76c3460435885d04f55f437cf42e736940dfe" Jan 29 17:00:03 crc kubenswrapper[4813]: I0129 17:00:03.617347 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495100-s4wnv" Jan 29 17:00:04 crc kubenswrapper[4813]: E0129 17:00:04.240807 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:00:17 crc kubenswrapper[4813]: I0129 17:00:17.766700 4813 generic.go:334] "Generic (PLEG): container finished" podID="1cfb8782-48f0-49b0-a2f8-378b60f304c7" containerID="6d4a5389f11fece56842c71ac840626c40db5ade9fd41e6524977fd93d84f59b" exitCode=0 Jan 29 17:00:17 crc kubenswrapper[4813]: I0129 17:00:17.766820 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" event={"ID":"1cfb8782-48f0-49b0-a2f8-378b60f304c7","Type":"ContainerDied","Data":"6d4a5389f11fece56842c71ac840626c40db5ade9fd41e6524977fd93d84f59b"} Jan 29 17:00:18 crc kubenswrapper[4813]: E0129 17:00:18.261893 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.100772 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.220748 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-scripts\") pod \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.220819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9xfn\" (UniqueName: \"kubernetes.io/projected/1cfb8782-48f0-49b0-a2f8-378b60f304c7-kube-api-access-d9xfn\") pod \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.220945 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-config-data\") pod \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.221052 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-combined-ca-bundle\") pod \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\" (UID: \"1cfb8782-48f0-49b0-a2f8-378b60f304c7\") " Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.227360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfb8782-48f0-49b0-a2f8-378b60f304c7-kube-api-access-d9xfn" (OuterVolumeSpecName: "kube-api-access-d9xfn") pod "1cfb8782-48f0-49b0-a2f8-378b60f304c7" (UID: "1cfb8782-48f0-49b0-a2f8-378b60f304c7"). InnerVolumeSpecName "kube-api-access-d9xfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.228246 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-scripts" (OuterVolumeSpecName: "scripts") pod "1cfb8782-48f0-49b0-a2f8-378b60f304c7" (UID: "1cfb8782-48f0-49b0-a2f8-378b60f304c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.250661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cfb8782-48f0-49b0-a2f8-378b60f304c7" (UID: "1cfb8782-48f0-49b0-a2f8-378b60f304c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.263034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-config-data" (OuterVolumeSpecName: "config-data") pod "1cfb8782-48f0-49b0-a2f8-378b60f304c7" (UID: "1cfb8782-48f0-49b0-a2f8-378b60f304c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.323767 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.323813 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.323826 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9xfn\" (UniqueName: \"kubernetes.io/projected/1cfb8782-48f0-49b0-a2f8-378b60f304c7-kube-api-access-d9xfn\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.323840 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cfb8782-48f0-49b0-a2f8-378b60f304c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.789626 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" event={"ID":"1cfb8782-48f0-49b0-a2f8-378b60f304c7","Type":"ContainerDied","Data":"0cb810a5574b603c7906038421a991f5cc0f3811f013a3cdbe4ccbd81b0a15f4"} Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.789686 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb810a5574b603c7906038421a991f5cc0f3811f013a3cdbe4ccbd81b0a15f4" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.789806 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cnf4h" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.872784 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:00:19 crc kubenswrapper[4813]: E0129 17:00:19.873162 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d362809d-d855-4090-aadc-97d90d262870" containerName="collect-profiles" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.873180 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d362809d-d855-4090-aadc-97d90d262870" containerName="collect-profiles" Jan 29 17:00:19 crc kubenswrapper[4813]: E0129 17:00:19.873220 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfb8782-48f0-49b0-a2f8-378b60f304c7" containerName="nova-cell1-conductor-db-sync" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.873226 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfb8782-48f0-49b0-a2f8-378b60f304c7" containerName="nova-cell1-conductor-db-sync" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.873409 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfb8782-48f0-49b0-a2f8-378b60f304c7" containerName="nova-cell1-conductor-db-sync" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.873436 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d362809d-d855-4090-aadc-97d90d262870" containerName="collect-profiles" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.873999 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.876309 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 17:00:19 crc kubenswrapper[4813]: I0129 17:00:19.889875 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.036444 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knhz5\" (UniqueName: \"kubernetes.io/projected/7baf1369-6dba-465b-aa2f-f518ae175d80-kube-api-access-knhz5\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.036920 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.036964 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.138639 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.139297 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.139441 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knhz5\" (UniqueName: \"kubernetes.io/projected/7baf1369-6dba-465b-aa2f-f518ae175d80-kube-api-access-knhz5\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.149090 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.149457 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.157628 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knhz5\" (UniqueName: \"kubernetes.io/projected/7baf1369-6dba-465b-aa2f-f518ae175d80-kube-api-access-knhz5\") pod \"nova-cell1-conductor-0\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.191902 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.654375 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:00:20 crc kubenswrapper[4813]: I0129 17:00:20.799630 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7baf1369-6dba-465b-aa2f-f518ae175d80","Type":"ContainerStarted","Data":"2920f276e0fe290bc673f2a5eaec6d057de31e08bb0f8eeba63d92add116587b"} Jan 29 17:00:21 crc kubenswrapper[4813]: I0129 17:00:21.811737 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7baf1369-6dba-465b-aa2f-f518ae175d80","Type":"ContainerStarted","Data":"631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593"} Jan 29 17:00:21 crc kubenswrapper[4813]: I0129 17:00:21.812273 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:21 crc kubenswrapper[4813]: I0129 17:00:21.836668 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.8366282419999997 podStartE2EDuration="2.836628242s" podCreationTimestamp="2026-01-29 17:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:00:21.827703086 +0000 UTC m=+1874.314906302" watchObservedRunningTime="2026-01-29 17:00:21.836628242 +0000 UTC m=+1874.323831448" Jan 29 17:00:25 crc kubenswrapper[4813]: I0129 17:00:25.235159 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 17:00:25 crc kubenswrapper[4813]: I0129 17:00:25.667151 4813 scope.go:117] "RemoveContainer" containerID="95e1ed9b55c99c7e2bc8ecb98831d34a4e578878550c94782b4a588796b0ec9e" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.334708 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-j8nqz"] Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.335807 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.337859 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.343969 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.347276 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j8nqz"] Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.449795 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-scripts\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.450158 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpfgr\" (UniqueName: \"kubernetes.io/projected/132e3c9c-5650-431b-8778-01be07a47038-kube-api-access-tpfgr\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.450202 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-config-data\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.450321 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.551938 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-scripts\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.552006 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpfgr\" (UniqueName: \"kubernetes.io/projected/132e3c9c-5650-431b-8778-01be07a47038-kube-api-access-tpfgr\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.552050 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-config-data\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.552129 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.557965 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.558059 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-scripts\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.563085 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-config-data\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.569582 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpfgr\" (UniqueName: \"kubernetes.io/projected/132e3c9c-5650-431b-8778-01be07a47038-kube-api-access-tpfgr\") pod \"nova-cell1-cell-mapping-j8nqz\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") " pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:26 crc kubenswrapper[4813]: I0129 17:00:26.658189 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j8nqz" Jan 29 17:00:28 crc kubenswrapper[4813]: I0129 17:00:28.597913 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j8nqz"] Jan 29 17:00:28 crc kubenswrapper[4813]: I0129 17:00:28.879893 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j8nqz" event={"ID":"132e3c9c-5650-431b-8778-01be07a47038","Type":"ContainerStarted","Data":"d38688200fdc81a1702950a866899b4b4670339d7a38d48f3b9c0380938d77ec"} Jan 29 17:00:28 crc kubenswrapper[4813]: I0129 17:00:28.880588 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j8nqz" event={"ID":"132e3c9c-5650-431b-8778-01be07a47038","Type":"ContainerStarted","Data":"51a8c9d8ad93ffeb4bbd7631087f5fedef13f33022fbd3b5fb2720eeef7df021"} Jan 29 17:00:28 crc kubenswrapper[4813]: I0129 17:00:28.904273 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-j8nqz" podStartSLOduration=2.904243366 podStartE2EDuration="2.904243366s" podCreationTimestamp="2026-01-29 17:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:00:28.89635994 +0000 UTC m=+1881.383563156" watchObservedRunningTime="2026-01-29 17:00:28.904243366 +0000 UTC m=+1881.391446582" Jan 29 17:00:33 crc kubenswrapper[4813]: E0129 17:00:33.243450 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:00:33 crc kubenswrapper[4813]: I0129 17:00:33.922143 4813 generic.go:334] "Generic (PLEG): container finished" podID="132e3c9c-5650-431b-8778-01be07a47038" containerID="d38688200fdc81a1702950a866899b4b4670339d7a38d48f3b9c0380938d77ec" exitCode=0 Jan 29 17:00:33 crc kubenswrapper[4813]: I0129 17:00:33.922190 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j8nqz" event={"ID":"132e3c9c-5650-431b-8778-01be07a47038","Type":"ContainerDied","Data":"d38688200fdc81a1702950a866899b4b4670339d7a38d48f3b9c0380938d77ec"} Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.306223 4813 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.306223 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j8nqz"
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.419804 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-combined-ca-bundle\") pod \"132e3c9c-5650-431b-8778-01be07a47038\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") "
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.419942 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpfgr\" (UniqueName: \"kubernetes.io/projected/132e3c9c-5650-431b-8778-01be07a47038-kube-api-access-tpfgr\") pod \"132e3c9c-5650-431b-8778-01be07a47038\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") "
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.420039 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-scripts\") pod \"132e3c9c-5650-431b-8778-01be07a47038\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") "
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.420148 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-config-data\") pod \"132e3c9c-5650-431b-8778-01be07a47038\" (UID: \"132e3c9c-5650-431b-8778-01be07a47038\") "
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.426527 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132e3c9c-5650-431b-8778-01be07a47038-kube-api-access-tpfgr" (OuterVolumeSpecName: "kube-api-access-tpfgr") pod "132e3c9c-5650-431b-8778-01be07a47038" (UID: "132e3c9c-5650-431b-8778-01be07a47038"). InnerVolumeSpecName "kube-api-access-tpfgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.426525 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-scripts" (OuterVolumeSpecName: "scripts") pod "132e3c9c-5650-431b-8778-01be07a47038" (UID: "132e3c9c-5650-431b-8778-01be07a47038"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.448545 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-config-data" (OuterVolumeSpecName: "config-data") pod "132e3c9c-5650-431b-8778-01be07a47038" (UID: "132e3c9c-5650-431b-8778-01be07a47038"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.452411 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "132e3c9c-5650-431b-8778-01be07a47038" (UID: "132e3c9c-5650-431b-8778-01be07a47038"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.522815 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.522899 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpfgr\" (UniqueName: \"kubernetes.io/projected/132e3c9c-5650-431b-8778-01be07a47038-kube-api-access-tpfgr\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.522922 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.522934 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/132e3c9c-5650-431b-8778-01be07a47038-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.942733 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j8nqz" event={"ID":"132e3c9c-5650-431b-8778-01be07a47038","Type":"ContainerDied","Data":"51a8c9d8ad93ffeb4bbd7631087f5fedef13f33022fbd3b5fb2720eeef7df021"}
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.942775 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51a8c9d8ad93ffeb4bbd7631087f5fedef13f33022fbd3b5fb2720eeef7df021"
Jan 29 17:00:35 crc kubenswrapper[4813]: I0129 17:00:35.942823 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j8nqz"
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.140394 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.140682 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-log" containerID="cri-o://816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358" gracePeriod=30
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.141202 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-api" containerID="cri-o://4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83" gracePeriod=30
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.157932 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.160335 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="86af341f-2786-405a-98a4-bb3d7bc8c155" containerName="nova-scheduler-scheduler" containerID="cri-o://5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61" gracePeriod=30
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.188799 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
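Each volume above walks through the same three steps: "UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached ... DevicePath \"\"". A sketch that cross-checks the first and last steps over a saved copy of this log, assuming Python 3.8+ and an illustrative kubelet.log filename (note the escaped quotes, which appear literally in these lines):

    import re

    started, detached = set(), set()
    for line in open("kubelet.log"):
        if m := re.search(r'UnmountVolume started for volume \\"([^"]+)\\"', line):
            started.add(m.group(1))
        elif m := re.search(r'Volume detached for volume \\"([^"]+)\\"', line):
            detached.add(m.group(1))
    # any volume that started unmounting but never detached is still pending
    print("unmounts still pending:", started - detached)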
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.189493 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-metadata" containerID="cri-o://983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363" gracePeriod=30
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.189267 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-log" containerID="cri-o://445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895" gracePeriod=30
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.954945 4813 generic.go:334] "Generic (PLEG): container finished" podID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerID="445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895" exitCode=143
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.955022 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a","Type":"ContainerDied","Data":"445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895"}
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.957796 4813 generic.go:334] "Generic (PLEG): container finished" podID="9561de1f-657c-41f6-b46b-182027605f4a" containerID="816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358" exitCode=143
Jan 29 17:00:36 crc kubenswrapper[4813]: I0129 17:00:36.957825 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9561de1f-657c-41f6-b46b-182027605f4a","Type":"ContainerDied","Data":"816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358"}
Jan 29 17:00:37 crc kubenswrapper[4813]: E0129 17:00:37.349390 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61 is running failed: container process not found" containerID="5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 29 17:00:37 crc kubenswrapper[4813]: E0129 17:00:37.350370 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61 is running failed: container process not found" containerID="5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 29 17:00:37 crc kubenswrapper[4813]: E0129 17:00:37.350807 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61 is running failed: container process not found" containerID="5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 29 17:00:37 crc kubenswrapper[4813]: E0129 17:00:37.350840 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="86af341f-2786-405a-98a4-bb3d7bc8c155" containerName="nova-scheduler-scheduler"
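The three failed ExecSync calls above are the scheduler's readiness probe racing container shutdown: the probe execs /usr/bin/pgrep -r DRST nova-scheduler inside a container whose process has already exited, so the runtime returns NotFound. A standalone sketch of the same check, assuming Python 3 on a Linux host with procps installed; pgrep exits 0 only when a matching process exists:

    import subprocess

    # same command the readiness probe runs, per the cmd=[...] field above
    rc = subprocess.run(
        ["/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"],
        capture_output=True,
    ).returncode
    print("ready" if rc == 0 else "not ready (exit %d)" % rc)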
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.517727 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.667720 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-combined-ca-bundle\") pod \"86af341f-2786-405a-98a4-bb3d7bc8c155\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") "
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.668005 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kwxz\" (UniqueName: \"kubernetes.io/projected/86af341f-2786-405a-98a4-bb3d7bc8c155-kube-api-access-2kwxz\") pod \"86af341f-2786-405a-98a4-bb3d7bc8c155\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") "
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.668082 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-config-data\") pod \"86af341f-2786-405a-98a4-bb3d7bc8c155\" (UID: \"86af341f-2786-405a-98a4-bb3d7bc8c155\") "
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.675376 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86af341f-2786-405a-98a4-bb3d7bc8c155-kube-api-access-2kwxz" (OuterVolumeSpecName: "kube-api-access-2kwxz") pod "86af341f-2786-405a-98a4-bb3d7bc8c155" (UID: "86af341f-2786-405a-98a4-bb3d7bc8c155"). InnerVolumeSpecName "kube-api-access-2kwxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.695683 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86af341f-2786-405a-98a4-bb3d7bc8c155" (UID: "86af341f-2786-405a-98a4-bb3d7bc8c155"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.710770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-config-data" (OuterVolumeSpecName: "config-data") pod "86af341f-2786-405a-98a4-bb3d7bc8c155" (UID: "86af341f-2786-405a-98a4-bb3d7bc8c155"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.770026 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.770087 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kwxz\" (UniqueName: \"kubernetes.io/projected/86af341f-2786-405a-98a4-bb3d7bc8c155-kube-api-access-2kwxz\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.770137 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86af341f-2786-405a-98a4-bb3d7bc8c155-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.969963 4813 generic.go:334] "Generic (PLEG): container finished" podID="86af341f-2786-405a-98a4-bb3d7bc8c155" containerID="5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61" exitCode=0
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.970015 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86af341f-2786-405a-98a4-bb3d7bc8c155","Type":"ContainerDied","Data":"5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61"}
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.970047 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86af341f-2786-405a-98a4-bb3d7bc8c155","Type":"ContainerDied","Data":"783b30cd258b28102a8ac51d067ef2cdb45b7cfd5bdcf954d66fc8e74da54a83"}
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.970069 4813 scope.go:117] "RemoveContainer" containerID="5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61"
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.970178 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.996467 4813 scope.go:117] "RemoveContainer" containerID="5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61"
Jan 29 17:00:37 crc kubenswrapper[4813]: E0129 17:00:37.996987 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61\": container with ID starting with 5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61 not found: ID does not exist" containerID="5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61"
Jan 29 17:00:37 crc kubenswrapper[4813]: I0129 17:00:37.997025 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61"} err="failed to get container status \"5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61\": rpc error: code = NotFound desc = could not find container \"5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61\": container with ID starting with 5867c0b6076400bc585d883d5efddf09dd15db1083f3e8daac0bb2e48a0f3b61 not found: ID does not exist"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.012268 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.025920 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.041380 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 17:00:38 crc kubenswrapper[4813]: E0129 17:00:38.042262 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132e3c9c-5650-431b-8778-01be07a47038" containerName="nova-manage"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.042288 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="132e3c9c-5650-431b-8778-01be07a47038" containerName="nova-manage"
Jan 29 17:00:38 crc kubenswrapper[4813]: E0129 17:00:38.042340 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86af341f-2786-405a-98a4-bb3d7bc8c155" containerName="nova-scheduler-scheduler"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.042351 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="86af341f-2786-405a-98a4-bb3d7bc8c155" containerName="nova-scheduler-scheduler"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.042583 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="86af341f-2786-405a-98a4-bb3d7bc8c155" containerName="nova-scheduler-scheduler"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.042647 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="132e3c9c-5650-431b-8778-01be07a47038" containerName="nova-manage"
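The NotFound pair above ("ContainerStatus from runtime service failed" followed by "DeleteContainer returned error") is harmless: the container was already removed, so the kubelet logs the error and treats the removal as done. A sketch of that idempotent-removal shape, assuming Python 3; the runtime object and NotFoundError are hypothetical stand-ins for a CRI client, not a real API:

    class NotFoundError(Exception):
        """Hypothetical stand-in for a CRI NotFound status."""

    def remove_container(runtime, container_id):
        # Mirrors the sequence above: NotFound means the container is
        # already gone, so removal counts as already complete.
        try:
            runtime.remove_container(container_id)
        except NotFoundError:
            pass  # idempotent: nothing left to delete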
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.043514 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.051784 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.052963 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.176882 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-config-data\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.176962 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.177167 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qrd\" (UniqueName: \"kubernetes.io/projected/9bd38907-4473-487b-8f79-85baaca96f00-kube-api-access-j7qrd\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.251393 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86af341f-2786-405a-98a4-bb3d7bc8c155" path="/var/lib/kubelet/pods/86af341f-2786-405a-98a4-bb3d7bc8c155/volumes"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.279549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-config-data\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.279607 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.279656 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7qrd\" (UniqueName: \"kubernetes.io/projected/9bd38907-4473-487b-8f79-85baaca96f00-kube-api-access-j7qrd\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.283763 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.284797 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-config-data\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.295158 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7qrd\" (UniqueName: \"kubernetes.io/projected/9bd38907-4473-487b-8f79-85baaca96f00-kube-api-access-j7qrd\") pod \"nova-scheduler-0\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.363264 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.795689 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 17:00:38 crc kubenswrapper[4813]: I0129 17:00:38.981131 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bd38907-4473-487b-8f79-85baaca96f00","Type":"ContainerStarted","Data":"3a751ce11281774ab13527a33687fb7838754b1f8dc3d495e303f4eba9c5dc8d"}
Jan 29 17:00:39 crc kubenswrapper[4813]: I0129 17:00:39.336862 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:41084->10.217.0.197:8775: read: connection reset by peer"
Jan 29 17:00:39 crc kubenswrapper[4813]: I0129 17:00:39.337134 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:41094->10.217.0.197:8775: read: connection reset by peer"
Jan 29 17:00:39 crc kubenswrapper[4813]: I0129 17:00:39.847465 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 17:00:39 crc kubenswrapper[4813]: I0129 17:00:39.852927 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 17:00:39 crc kubenswrapper[4813]: I0129 17:00:39.997955 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bd38907-4473-487b-8f79-85baaca96f00","Type":"ContainerStarted","Data":"4b0358d90883c6e8478c772d971d2902eb1bde203a499dda781652c9c94d54fc"}
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.001598 4813 generic.go:334] "Generic (PLEG): container finished" podID="9561de1f-657c-41f6-b46b-182027605f4a" containerID="4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83" exitCode=0
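The two "Probe failed" lines above carry probe type, pod, and failure output as separate key=value fields, which makes them easy to tabulate. A small counting sketch, assuming Python 3; the kubelet.log filename is illustrative:

    import re
    from collections import Counter

    failures = Counter()
    for line in open("kubelet.log"):
        m = re.search(r'"Probe failed" probeType="(\w+)" pod="([^"]+)"', line)
        if m:
            failures[(m.group(1), m.group(2))] += 1
    for (probe, pod), n in failures.items():
        print(probe, pod, n)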
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.001754 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.002368 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9561de1f-657c-41f6-b46b-182027605f4a","Type":"ContainerDied","Data":"4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83"}
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.002438 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9561de1f-657c-41f6-b46b-182027605f4a","Type":"ContainerDied","Data":"ffb09fc7d07979675072ec3ef32b129de4eb656af04a6ab215a9f7968b9cea45"}
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.002461 4813 scope.go:117] "RemoveContainer" containerID="4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.008617 4813 generic.go:334] "Generic (PLEG): container finished" podID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerID="983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363" exitCode=0
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.008671 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a","Type":"ContainerDied","Data":"983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363"}
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.008690 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.008704 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a","Type":"ContainerDied","Data":"8aeeb3802dcdf3879e58084e1810024eae489e29c1f69979338fb4d2c9b9c9c3"}
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.026050 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.026027895 podStartE2EDuration="2.026027895s" podCreationTimestamp="2026-01-29 17:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:00:40.015953997 +0000 UTC m=+1892.503157213" watchObservedRunningTime="2026-01-29 17:00:40.026027895 +0000 UTC m=+1892.513231111"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027477 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-public-tls-certs\") pod \"9561de1f-657c-41f6-b46b-182027605f4a\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027531 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-combined-ca-bundle\") pod \"9561de1f-657c-41f6-b46b-182027605f4a\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027618 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9561de1f-657c-41f6-b46b-182027605f4a-logs\") pod \"9561de1f-657c-41f6-b46b-182027605f4a\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-logs\") pod \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027675 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-nova-metadata-tls-certs\") pod \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027716 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbv85\" (UniqueName: \"kubernetes.io/projected/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-kube-api-access-lbv85\") pod \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-combined-ca-bundle\") pod \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027791 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-config-data\") pod \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\" (UID: \"b6bfd8d9-f27b-407b-a8a4-3592178a6b3a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027842 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-internal-tls-certs\") pod \"9561de1f-657c-41f6-b46b-182027605f4a\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.027862 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkhg7\" (UniqueName: \"kubernetes.io/projected/9561de1f-657c-41f6-b46b-182027605f4a-kube-api-access-pkhg7\") pod \"9561de1f-657c-41f6-b46b-182027605f4a\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.028346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-logs" (OuterVolumeSpecName: "logs") pod "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" (UID: "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.028880 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-config-data\") pod \"9561de1f-657c-41f6-b46b-182027605f4a\" (UID: \"9561de1f-657c-41f6-b46b-182027605f4a\") "
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.029583 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-logs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.031538 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9561de1f-657c-41f6-b46b-182027605f4a-logs" (OuterVolumeSpecName: "logs") pod "9561de1f-657c-41f6-b46b-182027605f4a" (UID: "9561de1f-657c-41f6-b46b-182027605f4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.040884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9561de1f-657c-41f6-b46b-182027605f4a-kube-api-access-pkhg7" (OuterVolumeSpecName: "kube-api-access-pkhg7") pod "9561de1f-657c-41f6-b46b-182027605f4a" (UID: "9561de1f-657c-41f6-b46b-182027605f4a"). InnerVolumeSpecName "kube-api-access-pkhg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.048848 4813 scope.go:117] "RemoveContainer" containerID="816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.055835 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-kube-api-access-lbv85" (OuterVolumeSpecName: "kube-api-access-lbv85") pod "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" (UID: "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a"). InnerVolumeSpecName "kube-api-access-lbv85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.074410 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-config-data" (OuterVolumeSpecName: "config-data") pod "9561de1f-657c-41f6-b46b-182027605f4a" (UID: "9561de1f-657c-41f6-b46b-182027605f4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.105773 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" (UID: "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.110796 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9561de1f-657c-41f6-b46b-182027605f4a" (UID: "9561de1f-657c-41f6-b46b-182027605f4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.118264 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-config-data" (OuterVolumeSpecName: "config-data") pod "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" (UID: "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.128694 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" (UID: "b6bfd8d9-f27b-407b-a8a4-3592178a6b3a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.132815 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkhg7\" (UniqueName: \"kubernetes.io/projected/9561de1f-657c-41f6-b46b-182027605f4a-kube-api-access-pkhg7\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.132858 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.132872 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.132884 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9561de1f-657c-41f6-b46b-182027605f4a-logs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.132899 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.132910 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbv85\" (UniqueName: \"kubernetes.io/projected/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-kube-api-access-lbv85\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.132921 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.132933 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a-config-data\") on node \"crc\" DevicePath \"\""
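Every "Volume detached" line above embeds the owning pod's UID in the volume path, so it is possible to tally how many volumes each terminated pod released before its "Cleaned up orphaned pod volumes dir" line appears later in the log. A sketch, assuming Python 3 and an illustrative kubelet.log filename:

    import re
    from collections import Counter

    detached = Counter()
    for line in open("kubelet.log"):
        m = re.search(r'Volume detached for volume .*?kubernetes\.io/[a-z-]+/([0-9a-f-]{36})-', line)
        if m:
            detached[m.group(1)] += 1  # pod UID -> number of volumes released
    print(detached)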
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.155703 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9561de1f-657c-41f6-b46b-182027605f4a" (UID: "9561de1f-657c-41f6-b46b-182027605f4a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.172249 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9561de1f-657c-41f6-b46b-182027605f4a" (UID: "9561de1f-657c-41f6-b46b-182027605f4a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.177826 4813 scope.go:117] "RemoveContainer" containerID="4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83"
Jan 29 17:00:40 crc kubenswrapper[4813]: E0129 17:00:40.178385 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83\": container with ID starting with 4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83 not found: ID does not exist" containerID="4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.178431 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83"} err="failed to get container status \"4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83\": rpc error: code = NotFound desc = could not find container \"4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83\": container with ID starting with 4c64e571f0008374a0039fbc6f900bb333523bfd6cae913308f9471427319a83 not found: ID does not exist"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.178461 4813 scope.go:117] "RemoveContainer" containerID="816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358"
Jan 29 17:00:40 crc kubenswrapper[4813]: E0129 17:00:40.178756 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358\": container with ID starting with 816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358 not found: ID does not exist" containerID="816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.178795 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358"} err="failed to get container status \"816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358\": rpc error: code = NotFound desc = could not find container \"816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358\": container with ID starting with 816dc5b21a6c48a7c06da8d16aa3c2acc375d89f1ca25f4f5f421e11d16d8358 not found: ID does not exist"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.178822 4813 scope.go:117] "RemoveContainer" containerID="983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.221085 4813 scope.go:117] "RemoveContainer" containerID="445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.234375 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.234436 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9561de1f-657c-41f6-b46b-182027605f4a-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.260416 4813 scope.go:117] "RemoveContainer" containerID="983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363"
Jan 29 17:00:40 crc kubenswrapper[4813]: E0129 17:00:40.267242 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363\": container with ID starting with 983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363 not found: ID does not exist" containerID="983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.267307 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363"} err="failed to get container status \"983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363\": rpc error: code = NotFound desc = could not find container \"983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363\": container with ID starting with 983a3fe084f4754701943ebcd7dc8cb38aad012ec8f7f099cdb42e218483d363 not found: ID does not exist"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.267342 4813 scope.go:117] "RemoveContainer" containerID="445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895"
Jan 29 17:00:40 crc kubenswrapper[4813]: E0129 17:00:40.272291 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895\": container with ID starting with 445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895 not found: ID does not exist" containerID="445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.272337 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895"} err="failed to get container status \"445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895\": rpc error: code = NotFound desc = could not find container \"445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895\": container with ID starting with 445d16dfa1596ceae3f2b589f0cc09b8f7bf4b0d9ccd0d1648b07e8e6d452895 not found: ID does not exist"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.350892 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.371592 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.382930 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404143 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 17:00:40 crc kubenswrapper[4813]: E0129 17:00:40.404601 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-api"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404621 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-api"
Jan 29 17:00:40 crc kubenswrapper[4813]: E0129 17:00:40.404642 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-metadata"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404649 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-metadata"
Jan 29 17:00:40 crc kubenswrapper[4813]: E0129 17:00:40.404672 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-log"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404678 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-log"
Jan 29 17:00:40 crc kubenswrapper[4813]: E0129 17:00:40.404688 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-log"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404694 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-log"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404885 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-api"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404906 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-log"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404918 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" containerName="nova-metadata-metadata"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.404925 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9561de1f-657c-41f6-b46b-182027605f4a" containerName="nova-api-log"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.405848 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.409187 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.409862 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.434832 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.467937 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.486900 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.488808 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.498759 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.503665 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.508275 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.517614 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.546913 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-logs\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.547026 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.547076 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djg7g\" (UniqueName: \"kubernetes.io/projected/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-kube-api-access-djg7g\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.547096 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-config-data\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.547144 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.648683 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.648955 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-logs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649062 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86vwf\" (UniqueName: \"kubernetes.io/projected/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-kube-api-access-86vwf\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-public-tls-certs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649298 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-config-data\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649389 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-logs\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649508 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649622 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649701 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djg7g\" (UniqueName: \"kubernetes.io/projected/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-kube-api-access-djg7g\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649793 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-config-data\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649868 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.649945 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-logs\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.654726 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.654823 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-config-data\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.666185 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.666653 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djg7g\" (UniqueName: \"kubernetes.io/projected/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-kube-api-access-djg7g\") pod \"nova-metadata-0\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.744705 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.751876 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-public-tls-certs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.751927 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-config-data\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.752011 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.752170 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.752201 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-logs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.752264 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86vwf\" (UniqueName: \"kubernetes.io/projected/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-kube-api-access-86vwf\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.752843 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-logs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.756024 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-config-data\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.756186 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.756864 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-public-tls-certs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.761695 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.775238 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86vwf\" (UniqueName: \"kubernetes.io/projected/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-kube-api-access-86vwf\") pod \"nova-api-0\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " pod="openstack/nova-api-0"
Jan 29 17:00:40 crc kubenswrapper[4813]: I0129 17:00:40.814255 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 17:00:41 crc kubenswrapper[4813]: I0129 17:00:41.231482 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 17:00:41 crc kubenswrapper[4813]: W0129 17:00:41.239692 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8892a0d0_88f2_4e0a_aafb_20e04d9e6289.slice/crio-794c7daea23eb33ec52d254a1732370f452682d28020a1e42fce9c727f21e78b WatchSource:0}: Error finding container 794c7daea23eb33ec52d254a1732370f452682d28020a1e42fce9c727f21e78b: Status 404 returned error can't find the container with id 794c7daea23eb33ec52d254a1732370f452682d28020a1e42fce9c727f21e78b
Jan 29 17:00:41 crc kubenswrapper[4813]: I0129 17:00:41.350018 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 17:00:41 crc kubenswrapper[4813]: W0129 17:00:41.360674 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd58049e2_10b0_4b6d_9a9e_d90420a1cecb.slice/crio-0befb1043a9c84c4170f19346e523cf5f821d0577e6a17325262d8cecadd9400 WatchSource:0}: Error finding container 0befb1043a9c84c4170f19346e523cf5f821d0577e6a17325262d8cecadd9400: Status 404 returned error can't find the container with id 0befb1043a9c84c4170f19346e523cf5f821d0577e6a17325262d8cecadd9400
Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.038257 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8892a0d0-88f2-4e0a-aafb-20e04d9e6289","Type":"ContainerStarted","Data":"e05393a1d8d62733f2c992a97a9c85905297b74ae0d29c1c6ef323e9a3b39679"}
Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.038552 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8892a0d0-88f2-4e0a-aafb-20e04d9e6289","Type":"ContainerStarted","Data":"9f57c05f59032c6a2d1a7cccf2fb5f3a52e7578aa8e9376e28838edd405492ff"}
Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.038563 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8892a0d0-88f2-4e0a-aafb-20e04d9e6289","Type":"ContainerStarted","Data":"794c7daea23eb33ec52d254a1732370f452682d28020a1e42fce9c727f21e78b"}
Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.059224 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d58049e2-10b0-4b6d-9a9e-d90420a1cecb","Type":"ContainerStarted","Data":"34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567"}
Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.059282 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d58049e2-10b0-4b6d-9a9e-d90420a1cecb","Type":"ContainerStarted","Data":"6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4"}
Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.059297 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d58049e2-10b0-4b6d-9a9e-d90420a1cecb","Type":"ContainerStarted","Data":"0befb1043a9c84c4170f19346e523cf5f821d0577e6a17325262d8cecadd9400"}
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:00:42.076154639 +0000 UTC m=+1894.563357865" watchObservedRunningTime="2026-01-29 17:00:42.084772466 +0000 UTC m=+1894.571975682" Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.098890 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.098871259 podStartE2EDuration="2.098871259s" podCreationTimestamp="2026-01-29 17:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:00:42.095473252 +0000 UTC m=+1894.582676468" watchObservedRunningTime="2026-01-29 17:00:42.098871259 +0000 UTC m=+1894.586074475" Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.251413 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9561de1f-657c-41f6-b46b-182027605f4a" path="/var/lib/kubelet/pods/9561de1f-657c-41f6-b46b-182027605f4a/volumes" Jan 29 17:00:42 crc kubenswrapper[4813]: I0129 17:00:42.252075 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6bfd8d9-f27b-407b-a8a4-3592178a6b3a" path="/var/lib/kubelet/pods/b6bfd8d9-f27b-407b-a8a4-3592178a6b3a/volumes" Jan 29 17:00:43 crc kubenswrapper[4813]: I0129 17:00:43.363598 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 17:00:44 crc kubenswrapper[4813]: E0129 17:00:44.379992 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 17:00:44 crc kubenswrapper[4813]: E0129 17:00:44.380197 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzwwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(23623af2-d480-4e00-ae8d-51da73cee712): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:00:44 crc kubenswrapper[4813]: E0129 17:00:44.381433 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:00:45 crc kubenswrapper[4813]: I0129 17:00:45.746101 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
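[editor's note] The "Unhandled Error" entry above serializes the full spec of the proxy-httpd container that failed to start. Decoded into k8s.io/api/core/v1 terms for readability, its two probes look as follows; this is a reconstruction of values already present in the dump, not the operator's source. Neither probe ever ran here, since the image pull fails with a 403 before the container starts.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Values copied from the logged Container{} dump: HTTP GET "/" on
	// containerPort 3000 (Port:{0 3000} is an IntOrString holding int 3000),
	// 30s timeout/period, failure threshold 3. Only the initial delay
	// differs between the two probes (300s liveness vs 10s readiness).
	handler := corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path:   "/",
			Port:   intstr.FromInt(3000),
			Scheme: corev1.URISchemeHTTP,
		},
	}
	liveness := &corev1.Probe{
		ProbeHandler:        handler,
		InitialDelaySeconds: 300,
		TimeoutSeconds:      30,
		PeriodSeconds:       30,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}
	readiness := liveness.DeepCopy()
	readiness.InitialDelaySeconds = 10
	fmt.Printf("liveness: %+v\nreadiness: %+v\n", liveness, readiness)
}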
pod="openstack/nova-metadata-0" Jan 29 17:00:45 crc kubenswrapper[4813]: I0129 17:00:45.746505 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 17:00:48 crc kubenswrapper[4813]: I0129 17:00:48.363997 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 17:00:48 crc kubenswrapper[4813]: I0129 17:00:48.400548 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 17:00:49 crc kubenswrapper[4813]: I0129 17:00:49.393503 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 17:00:50 crc kubenswrapper[4813]: I0129 17:00:50.745162 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 17:00:50 crc kubenswrapper[4813]: I0129 17:00:50.745224 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 17:00:50 crc kubenswrapper[4813]: I0129 17:00:50.816034 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 17:00:50 crc kubenswrapper[4813]: I0129 17:00:50.816092 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 17:00:51 crc kubenswrapper[4813]: I0129 17:00:51.759382 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:00:51 crc kubenswrapper[4813]: I0129 17:00:51.759410 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:00:51 crc kubenswrapper[4813]: I0129 17:00:51.830320 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:00:51 crc kubenswrapper[4813]: I0129 17:00:51.830320 4813 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 17:00:57 crc kubenswrapper[4813]: E0129 17:00:57.243180 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.147729 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29495101-mhdx7"] Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.150819 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.161027 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495101-mhdx7"] Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.292451 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-config-data\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.292529 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-combined-ca-bundle\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.292581 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjtp\" (UniqueName: \"kubernetes.io/projected/a28a20c4-1155-4d60-b6be-011bb1479366-kube-api-access-ksjtp\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.293529 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-fernet-keys\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.395981 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-config-data\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.396062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-combined-ca-bundle\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.396100 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjtp\" (UniqueName: \"kubernetes.io/projected/a28a20c4-1155-4d60-b6be-011bb1479366-kube-api-access-ksjtp\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.396248 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-fernet-keys\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.402600 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-config-data\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.410902 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-combined-ca-bundle\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.414459 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-fernet-keys\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.417512 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjtp\" (UniqueName: \"kubernetes.io/projected/a28a20c4-1155-4d60-b6be-011bb1479366-kube-api-access-ksjtp\") pod \"keystone-cron-29495101-mhdx7\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.478357 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.751625 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.752386 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.758272 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.822324 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.822963 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.823033 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.827581 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 17:01:00 crc kubenswrapper[4813]: W0129 17:01:00.921148 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda28a20c4_1155_4d60_b6be_011bb1479366.slice/crio-d8f4e12b3ca5c41e042971f5d6045bccc67ada6608a9e19fd9cfda520ccae51d WatchSource:0}: Error finding container d8f4e12b3ca5c41e042971f5d6045bccc67ada6608a9e19fd9cfda520ccae51d: Status 404 returned error can't find the container with id d8f4e12b3ca5c41e042971f5d6045bccc67ada6608a9e19fd9cfda520ccae51d Jan 29 17:01:00 crc kubenswrapper[4813]: I0129 17:01:00.924290 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495101-mhdx7"] Jan 29 17:01:01 crc kubenswrapper[4813]: I0129 17:01:01.476297 4813 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-cron-29495101-mhdx7" event={"ID":"a28a20c4-1155-4d60-b6be-011bb1479366","Type":"ContainerStarted","Data":"4d43b35f24d6bb7052db0c2918b21d113c70aee5d59fd072a7ca4d61d18aaa55"} Jan 29 17:01:01 crc kubenswrapper[4813]: I0129 17:01:01.476350 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495101-mhdx7" event={"ID":"a28a20c4-1155-4d60-b6be-011bb1479366","Type":"ContainerStarted","Data":"d8f4e12b3ca5c41e042971f5d6045bccc67ada6608a9e19fd9cfda520ccae51d"} Jan 29 17:01:01 crc kubenswrapper[4813]: I0129 17:01:01.476772 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 17:01:01 crc kubenswrapper[4813]: I0129 17:01:01.481929 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 17:01:01 crc kubenswrapper[4813]: I0129 17:01:01.486002 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 17:01:01 crc kubenswrapper[4813]: I0129 17:01:01.494820 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29495101-mhdx7" podStartSLOduration=1.494798348 podStartE2EDuration="1.494798348s" podCreationTimestamp="2026-01-29 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 17:01:01.491255037 +0000 UTC m=+1913.978458273" watchObservedRunningTime="2026-01-29 17:01:01.494798348 +0000 UTC m=+1913.982001564" Jan 29 17:01:03 crc kubenswrapper[4813]: I0129 17:01:03.497531 4813 generic.go:334] "Generic (PLEG): container finished" podID="a28a20c4-1155-4d60-b6be-011bb1479366" containerID="4d43b35f24d6bb7052db0c2918b21d113c70aee5d59fd072a7ca4d61d18aaa55" exitCode=0 Jan 29 17:01:03 crc kubenswrapper[4813]: I0129 17:01:03.497687 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495101-mhdx7" event={"ID":"a28a20c4-1155-4d60-b6be-011bb1479366","Type":"ContainerDied","Data":"4d43b35f24d6bb7052db0c2918b21d113c70aee5d59fd072a7ca4d61d18aaa55"} Jan 29 17:01:04 crc kubenswrapper[4813]: I0129 17:01:04.830182 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:04 crc kubenswrapper[4813]: I0129 17:01:04.979929 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-config-data\") pod \"a28a20c4-1155-4d60-b6be-011bb1479366\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " Jan 29 17:01:04 crc kubenswrapper[4813]: I0129 17:01:04.979990 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-fernet-keys\") pod \"a28a20c4-1155-4d60-b6be-011bb1479366\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " Jan 29 17:01:04 crc kubenswrapper[4813]: I0129 17:01:04.980015 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-combined-ca-bundle\") pod \"a28a20c4-1155-4d60-b6be-011bb1479366\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " Jan 29 17:01:04 crc kubenswrapper[4813]: I0129 17:01:04.980099 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksjtp\" (UniqueName: \"kubernetes.io/projected/a28a20c4-1155-4d60-b6be-011bb1479366-kube-api-access-ksjtp\") pod \"a28a20c4-1155-4d60-b6be-011bb1479366\" (UID: \"a28a20c4-1155-4d60-b6be-011bb1479366\") " Jan 29 17:01:04 crc kubenswrapper[4813]: I0129 17:01:04.986407 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a28a20c4-1155-4d60-b6be-011bb1479366" (UID: "a28a20c4-1155-4d60-b6be-011bb1479366"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:01:04 crc kubenswrapper[4813]: I0129 17:01:04.990265 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28a20c4-1155-4d60-b6be-011bb1479366-kube-api-access-ksjtp" (OuterVolumeSpecName: "kube-api-access-ksjtp") pod "a28a20c4-1155-4d60-b6be-011bb1479366" (UID: "a28a20c4-1155-4d60-b6be-011bb1479366"). InnerVolumeSpecName "kube-api-access-ksjtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.011791 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a28a20c4-1155-4d60-b6be-011bb1479366" (UID: "a28a20c4-1155-4d60-b6be-011bb1479366"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.031904 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-config-data" (OuterVolumeSpecName: "config-data") pod "a28a20c4-1155-4d60-b6be-011bb1479366" (UID: "a28a20c4-1155-4d60-b6be-011bb1479366"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.081549 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.081582 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.081598 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28a20c4-1155-4d60-b6be-011bb1479366-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.081618 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksjtp\" (UniqueName: \"kubernetes.io/projected/a28a20c4-1155-4d60-b6be-011bb1479366-kube-api-access-ksjtp\") on node \"crc\" DevicePath \"\"" Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.516807 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495101-mhdx7" event={"ID":"a28a20c4-1155-4d60-b6be-011bb1479366","Type":"ContainerDied","Data":"d8f4e12b3ca5c41e042971f5d6045bccc67ada6608a9e19fd9cfda520ccae51d"} Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.516843 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f4e12b3ca5c41e042971f5d6045bccc67ada6608a9e19fd9cfda520ccae51d" Jan 29 17:01:05 crc kubenswrapper[4813]: I0129 17:01:05.516881 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495101-mhdx7" Jan 29 17:01:09 crc kubenswrapper[4813]: E0129 17:01:09.243375 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:01:23 crc kubenswrapper[4813]: E0129 17:01:23.243416 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:01:36 crc kubenswrapper[4813]: E0129 17:01:36.242369 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:01:49 crc kubenswrapper[4813]: E0129 17:01:49.243082 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:02:00 crc kubenswrapper[4813]: I0129 
Jan 29 17:02:00 crc kubenswrapper[4813]: I0129 17:02:00.239892 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:02:00 crc kubenswrapper[4813]: I0129 17:02:00.240251 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:02:00 crc kubenswrapper[4813]: E0129 17:02:00.242474 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:02:12 crc kubenswrapper[4813]: E0129 17:02:12.364711 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79"
Jan 29 17:02:12 crc kubenswrapper[4813]: E0129 17:02:12.365447 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzwwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(23623af2-d480-4e00-ae8d-51da73cee712): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:02:12 crc kubenswrapper[4813]: E0129 17:02:12.366659 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:02:27 crc kubenswrapper[4813]: E0129 17:02:27.244317 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:02:30 crc kubenswrapper[4813]: I0129 17:02:30.239719 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:02:30 crc kubenswrapper[4813]: I0129 17:02:30.240066 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:02:39 crc kubenswrapper[4813]: E0129 17:02:39.245446 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:02:50 crc kubenswrapper[4813]: E0129 17:02:50.242975 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" 
podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:02:58 crc kubenswrapper[4813]: I0129 17:02:58.036906 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-vdkq7"] Jan 29 17:02:58 crc kubenswrapper[4813]: I0129 17:02:58.045484 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-vdkq7"] Jan 29 17:02:58 crc kubenswrapper[4813]: I0129 17:02:58.252266 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a667f87f-f7c9-4f33-8e88-db86259f3111" path="/var/lib/kubelet/pods/a667f87f-f7c9-4f33-8e88-db86259f3111/volumes" Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.049355 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9248-account-create-update-cpnmh"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.088604 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jxznp"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.098028 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9248-account-create-update-cpnmh"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.107725 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-94cf-account-create-update-d82t2"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.116448 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vlfk2"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.125539 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cf77-account-create-update-w2kw6"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.133949 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-94cf-account-create-update-d82t2"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.143516 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jxznp"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.154639 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-cf77-account-create-update-w2kw6"] Jan 29 17:02:59 crc kubenswrapper[4813]: I0129 17:02:59.166648 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vlfk2"] Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.240279 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.240344 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.251059 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ff68719-5a69-4407-80c9-130f7a261c04" path="/var/lib/kubelet/pods/0ff68719-5a69-4407-80c9-130f7a261c04/volumes" Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.252208 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2910884b-4b6f-4001-b7a6-cb47ad2b739b" 
path="/var/lib/kubelet/pods/2910884b-4b6f-4001-b7a6-cb47ad2b739b/volumes" Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.252973 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4112ac2a-6403-43a2-81a9-089c9fce1e1c" path="/var/lib/kubelet/pods/4112ac2a-6403-43a2-81a9-089c9fce1e1c/volumes" Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.253708 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45176ce9-7d7d-4342-b14f-b4dbf8628b37" path="/var/lib/kubelet/pods/45176ce9-7d7d-4342-b14f-b4dbf8628b37/volumes" Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.255158 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ab991e-8fc7-4320-8627-0a1020527696" path="/var/lib/kubelet/pods/69ab991e-8fc7-4320-8627-0a1020527696/volumes" Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.255936 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.256981 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c67bf6fa310210448a647da62a3e9c7bcafe61094dc8679ad60bf50059882bd5"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.257148 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://c67bf6fa310210448a647da62a3e9c7bcafe61094dc8679ad60bf50059882bd5" gracePeriod=600 Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.546912 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="c67bf6fa310210448a647da62a3e9c7bcafe61094dc8679ad60bf50059882bd5" exitCode=0 Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.546968 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"c67bf6fa310210448a647da62a3e9c7bcafe61094dc8679ad60bf50059882bd5"} Jan 29 17:03:00 crc kubenswrapper[4813]: I0129 17:03:00.547274 4813 scope.go:117] "RemoveContainer" containerID="eb384ff33d9d786cc7e35c082f8a243cfd8a37eb49733d8b751e259e1dadbbdd" Jan 29 17:03:01 crc kubenswrapper[4813]: I0129 17:03:01.556768 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"} Jan 29 17:03:05 crc kubenswrapper[4813]: E0129 17:03:05.242405 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:03:13 crc kubenswrapper[4813]: I0129 17:03:13.037535 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-z6vxg"] Jan 29 
Jan 29 17:03:13 crc kubenswrapper[4813]: I0129 17:03:13.045337 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-z6vxg"]
Jan 29 17:03:14 crc kubenswrapper[4813]: I0129 17:03:14.250183 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec9f8e8a-1914-4ff8-a02e-044117f535b2" path="/var/lib/kubelet/pods/ec9f8e8a-1914-4ff8-a02e-044117f535b2/volumes"
Jan 29 17:03:19 crc kubenswrapper[4813]: E0129 17:03:19.243315 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:03:25 crc kubenswrapper[4813]: I0129 17:03:25.866797 4813 scope.go:117] "RemoveContainer" containerID="ce5c32a667986b42c5850eeeb60cd5f2b431a0fb460affe31e4bed46bb406b73"
Jan 29 17:03:25 crc kubenswrapper[4813]: I0129 17:03:25.890819 4813 scope.go:117] "RemoveContainer" containerID="3bf3df4b07dd7f86db3964c8dad4a78e0465b4368f0ee9fa8400066a2ebe7ae3"
Jan 29 17:03:25 crc kubenswrapper[4813]: I0129 17:03:25.931636 4813 scope.go:117] "RemoveContainer" containerID="0e267593dfe5d86281e238fd74b2f732badf00023a2da0789f38bd552593e52d"
Jan 29 17:03:25 crc kubenswrapper[4813]: I0129 17:03:25.980756 4813 scope.go:117] "RemoveContainer" containerID="a050c06e3172b7342e0c18cd9195f3454e570a16b8cefff22309e38893720db0"
Jan 29 17:03:26 crc kubenswrapper[4813]: I0129 17:03:26.009851 4813 scope.go:117] "RemoveContainer" containerID="be9b741f9e8bf46178022b96a89890a0e0e7c7be350924b1f642c3e821364e71"
Jan 29 17:03:26 crc kubenswrapper[4813]: I0129 17:03:26.047268 4813 scope.go:117] "RemoveContainer" containerID="2e961f3738f83dea177de8c765c5c2b24916e1ed20046897298f11c87664b7e8"
Jan 29 17:03:26 crc kubenswrapper[4813]: I0129 17:03:26.106577 4813 scope.go:117] "RemoveContainer" containerID="c3d111a7d82308b87a5d3cd182c345565bf2ef14847685e0c1e62de5700f0e87"
Jan 29 17:03:26 crc kubenswrapper[4813]: I0129 17:03:26.148500 4813 scope.go:117] "RemoveContainer" containerID="c5c687f8aab50f60e671310103be4e09a991eac4e72d7b46abbda47a928b52c0"
Jan 29 17:03:26 crc kubenswrapper[4813]: I0129 17:03:26.169931 4813 scope.go:117] "RemoveContainer" containerID="1e5b623683f6cb7ba9c5493b668c94cddb77f58344b1409daac03e8d2ad6df61"
Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.041625 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ndpm2"]
Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.055497 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ndpm2"]
Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.070613 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5abd-account-create-update-572vc"]
Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.081106 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-q4cj5"]
Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.089613 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-x847l"]
Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.096332 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8fd5-account-create-update-5d6vp"]
Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.102787 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-5abd-account-create-update-572vc"] Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.109578 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-x847l"] Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.116492 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8fd5-account-create-update-5d6vp"] Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.123818 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-q4cj5"] Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.250776 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f70648-8c28-4159-9ae5-284478e0815c" path="/var/lib/kubelet/pods/58f70648-8c28-4159-9ae5-284478e0815c/volumes" Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.252583 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63edd52e-83f3-4a95-9b1d-3085129d0555" path="/var/lib/kubelet/pods/63edd52e-83f3-4a95-9b1d-3085129d0555/volumes" Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.253554 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676e4275-70b7-4ba7-abbc-57cd145d0ff1" path="/var/lib/kubelet/pods/676e4275-70b7-4ba7-abbc-57cd145d0ff1/volumes" Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.254179 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3d995a-ff26-44ea-a0c7-dd1959798b44" path="/var/lib/kubelet/pods/8b3d995a-ff26-44ea-a0c7-dd1959798b44/volumes" Jan 29 17:03:30 crc kubenswrapper[4813]: I0129 17:03:30.255389 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f635f7b-65c8-4f91-8473-56bfd6775987" path="/var/lib/kubelet/pods/9f635f7b-65c8-4f91-8473-56bfd6775987/volumes" Jan 29 17:03:33 crc kubenswrapper[4813]: E0129 17:03:33.241642 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" Jan 29 17:03:34 crc kubenswrapper[4813]: I0129 17:03:34.030763 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6jmvf"] Jan 29 17:03:34 crc kubenswrapper[4813]: I0129 17:03:34.046249 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cb83-account-create-update-4sgpf"] Jan 29 17:03:34 crc kubenswrapper[4813]: I0129 17:03:34.057032 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6jmvf"] Jan 29 17:03:34 crc kubenswrapper[4813]: I0129 17:03:34.064695 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cb83-account-create-update-4sgpf"] Jan 29 17:03:34 crc kubenswrapper[4813]: I0129 17:03:34.252336 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a9d9351-4988-4044-b47f-de154e889b47" path="/var/lib/kubelet/pods/3a9d9351-4988-4044-b47f-de154e889b47/volumes" Jan 29 17:03:34 crc kubenswrapper[4813]: I0129 17:03:34.253209 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb9308c-c3ab-42c8-8ffb-e813511fe562" path="/var/lib/kubelet/pods/3bb9308c-c3ab-42c8-8ffb-e813511fe562/volumes" Jan 29 17:03:45 crc kubenswrapper[4813]: E0129 17:03:45.243433 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:03:46 crc kubenswrapper[4813]: I0129 17:03:46.046386 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-7w666"]
Jan 29 17:03:46 crc kubenswrapper[4813]: I0129 17:03:46.057169 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-7w666"]
Jan 29 17:03:46 crc kubenswrapper[4813]: I0129 17:03:46.252437 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346c46a8-5fe9-44d0-882a-6ef6412e6e0d" path="/var/lib/kubelet/pods/346c46a8-5fe9-44d0-882a-6ef6412e6e0d/volumes"
Jan 29 17:03:57 crc kubenswrapper[4813]: E0129 17:03:57.243077 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:04:09 crc kubenswrapper[4813]: E0129 17:04:09.242627 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:04:24 crc kubenswrapper[4813]: E0129 17:04:24.243567 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:04:26 crc kubenswrapper[4813]: I0129 17:04:26.318259 4813 scope.go:117] "RemoveContainer" containerID="5d34de84fb2abfbb4654d44b445e12d4d418d88954b7a25ac8505eb022196a89"
Jan 29 17:04:26 crc kubenswrapper[4813]: I0129 17:04:26.361868 4813 scope.go:117] "RemoveContainer" containerID="08482866b1cd8b860fea765deb6f1e75dd02ef7fd225c88c5a2214f17fa836f2"
Jan 29 17:04:26 crc kubenswrapper[4813]: I0129 17:04:26.390106 4813 scope.go:117] "RemoveContainer" containerID="212be768f897ca64f3d18c739828b8b696cf5563b1d0688911ec89515e813b83"
Jan 29 17:04:26 crc kubenswrapper[4813]: I0129 17:04:26.438582 4813 scope.go:117] "RemoveContainer" containerID="f609975417f95bdceff025a15be16c408c073985a17477d47096e6d647a7884d"
Jan 29 17:04:26 crc kubenswrapper[4813]: I0129 17:04:26.480549 4813 scope.go:117] "RemoveContainer" containerID="7152055c87a94a3b725894a406e8e6935ce7638af6f4e8afe49c22fe1ae5e922"
Jan 29 17:04:26 crc kubenswrapper[4813]: I0129 17:04:26.529614 4813 scope.go:117] "RemoveContainer" containerID="1566217be5fc6671168e1436a1ffe45d1c7fe94af1c506065d72f0cfdaa35a2f"
Jan 29 17:04:26 crc kubenswrapper[4813]: I0129 17:04:26.582876 4813 scope.go:117] "RemoveContainer" containerID="9588b7fdab2be60f9baa05f0301b5623cf399165b2c320c6fdd3dd4ec4e7c39e"
Jan 29 17:04:26 crc kubenswrapper[4813]: I0129 17:04:26.610858 4813 scope.go:117] "RemoveContainer" containerID="999eeaabe251e8d162ef987bc0a172459387bd5c53954b54eebf0107ea211637"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.562370 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ssmnk"]
Jan 29 17:04:27 crc kubenswrapper[4813]: E0129 17:04:27.563205 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28a20c4-1155-4d60-b6be-011bb1479366" containerName="keystone-cron"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.563222 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28a20c4-1155-4d60-b6be-011bb1479366" containerName="keystone-cron"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.563456 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28a20c4-1155-4d60-b6be-011bb1479366" containerName="keystone-cron"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.567371 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.573617 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ssmnk"]
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.669092 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzjsk\" (UniqueName: \"kubernetes.io/projected/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-kube-api-access-nzjsk\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.669164 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-utilities\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.669314 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-catalog-content\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.770734 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzjsk\" (UniqueName: \"kubernetes.io/projected/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-kube-api-access-nzjsk\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.770801 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-utilities\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.770931 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-catalog-content\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.771522 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-catalog-content\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.771591 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-utilities\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.792071 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzjsk\" (UniqueName: \"kubernetes.io/projected/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-kube-api-access-nzjsk\") pod \"community-operators-ssmnk\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") " pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:27 crc kubenswrapper[4813]: I0129 17:04:27.892468 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:04:28 crc kubenswrapper[4813]: I0129 17:04:28.430476 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ssmnk"]
Jan 29 17:04:29 crc kubenswrapper[4813]: I0129 17:04:29.297391 4813 generic.go:334] "Generic (PLEG): container finished" podID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerID="e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce" exitCode=0
Jan 29 17:04:29 crc kubenswrapper[4813]: I0129 17:04:29.297445 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssmnk" event={"ID":"6f57c049-b9d0-4b2b-9270-ba2319ad0dca","Type":"ContainerDied","Data":"e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce"}
Jan 29 17:04:29 crc kubenswrapper[4813]: I0129 17:04:29.298009 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssmnk" event={"ID":"6f57c049-b9d0-4b2b-9270-ba2319ad0dca","Type":"ContainerStarted","Data":"7b5bd0945c61fbcedc8dff89fcdc3af4907e1593fe1a71e8100ad756254f48af"}
Jan 29 17:04:29 crc kubenswrapper[4813]: I0129 17:04:29.300828 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 17:04:29 crc kubenswrapper[4813]: E0129 17:04:29.442056 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 17:04:29 crc kubenswrapper[4813]: E0129 17:04:29.442246 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzjsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ssmnk_openshift-marketplace(6f57c049-b9d0-4b2b-9270-ba2319ad0dca): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:04:29 crc kubenswrapper[4813]: E0129 17:04:29.443481 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-ssmnk" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca"
Jan 29 17:04:30 crc kubenswrapper[4813]: E0129 17:04:30.309328 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ssmnk" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca"
Jan 29 17:04:36 crc kubenswrapper[4813]: E0129 17:04:36.242504 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:04:44 crc kubenswrapper[4813]: E0129 17:04:44.404245 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 17:04:44 crc kubenswrapper[4813]: E0129 17:04:44.405591 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzjsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ssmnk_openshift-marketplace(6f57c049-b9d0-4b2b-9270-ba2319ad0dca): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:04:44 crc kubenswrapper[4813]: E0129 17:04:44.407358 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-ssmnk" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca"
Jan 29 17:04:49 crc kubenswrapper[4813]: E0129 17:04:49.243337 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712"
Jan 29 17:04:56 crc kubenswrapper[4813]: E0129 17:04:56.246290 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ssmnk" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca"
Jan 29 17:05:00 crc kubenswrapper[4813]: I0129 17:05:00.239770 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:05:00 crc kubenswrapper[4813]: I0129 17:05:00.240233 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:05:12 crc kubenswrapper[4813]: I0129 17:05:12.660248 4813 generic.go:334] "Generic (PLEG): container finished" podID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerID="ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1" exitCode=0
Jan 29 17:05:12 crc kubenswrapper[4813]: I0129 17:05:12.660319 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssmnk" event={"ID":"6f57c049-b9d0-4b2b-9270-ba2319ad0dca","Type":"ContainerDied","Data":"ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1"}
Jan 29 17:05:13 crc kubenswrapper[4813]: I0129 17:05:13.670998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssmnk" event={"ID":"6f57c049-b9d0-4b2b-9270-ba2319ad0dca","Type":"ContainerStarted","Data":"d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1"}
Jan 29 17:05:13 crc kubenswrapper[4813]: I0129 17:05:13.698709 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ssmnk" podStartSLOduration=2.904139916 podStartE2EDuration="46.698688826s" podCreationTimestamp="2026-01-29 17:04:27 +0000 UTC" firstStartedPulling="2026-01-29 17:04:29.300526572 +0000 UTC m=+2121.787729788" lastFinishedPulling="2026-01-29 17:05:13.095075472 +0000 UTC m=+2165.582278698" observedRunningTime="2026-01-29 17:05:13.689375692 +0000 UTC m=+2166.176578908" watchObservedRunningTime="2026-01-29 17:05:13.698688826 +0000 UTC m=+2166.185892042"
Jan 29 17:05:15 crc kubenswrapper[4813]: I0129 17:05:15.688583 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerStarted","Data":"84896b434c6c1c17df63e1f932fec30168843e2458046455117c7bf9441bfa8e"}
Jan 29 17:05:15 crc kubenswrapper[4813]: I0129 17:05:15.689308 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 17:05:15 crc kubenswrapper[4813]: I0129 17:05:15.714332 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3615658919999998 podStartE2EDuration="6m19.714315073s" podCreationTimestamp="2026-01-29 16:58:56 +0000 UTC" firstStartedPulling="2026-01-29 16:58:57.166970901 +0000 UTC m=+1789.654174117" lastFinishedPulling="2026-01-29 17:05:14.519720082 +0000 UTC m=+2167.006923298" observedRunningTime="2026-01-29 17:05:15.713946762 +0000 UTC m=+2168.201150018" watchObservedRunningTime="2026-01-29 17:05:15.714315073 +0000 UTC m=+2168.201518289"
Jan 29 17:05:17 crc kubenswrapper[4813]: I0129 17:05:17.892867 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:05:17 crc kubenswrapper[4813]: I0129 17:05:17.893291 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:05:17 crc kubenswrapper[4813]: I0129 17:05:17.937793 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:05:18 crc kubenswrapper[4813]: I0129 17:05:18.783054 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:05:18 crc kubenswrapper[4813]: I0129 17:05:18.833576 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ssmnk"]
Jan 29 17:05:20 crc kubenswrapper[4813]: I0129 17:05:20.727504 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ssmnk" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerName="registry-server" containerID="cri-o://d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1" gracePeriod=2
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.172235 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.320232 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-utilities\") pod \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") "
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.320645 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-catalog-content\") pod \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") "
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.320676 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzjsk\" (UniqueName: \"kubernetes.io/projected/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-kube-api-access-nzjsk\") pod \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\" (UID: \"6f57c049-b9d0-4b2b-9270-ba2319ad0dca\") "
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.321171 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-utilities" (OuterVolumeSpecName: "utilities") pod "6f57c049-b9d0-4b2b-9270-ba2319ad0dca" (UID: "6f57c049-b9d0-4b2b-9270-ba2319ad0dca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.322325 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.327141 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-kube-api-access-nzjsk" (OuterVolumeSpecName: "kube-api-access-nzjsk") pod "6f57c049-b9d0-4b2b-9270-ba2319ad0dca" (UID: "6f57c049-b9d0-4b2b-9270-ba2319ad0dca"). InnerVolumeSpecName "kube-api-access-nzjsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.374614 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f57c049-b9d0-4b2b-9270-ba2319ad0dca" (UID: "6f57c049-b9d0-4b2b-9270-ba2319ad0dca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.423286 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.423321 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzjsk\" (UniqueName: \"kubernetes.io/projected/6f57c049-b9d0-4b2b-9270-ba2319ad0dca-kube-api-access-nzjsk\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.739347 4813 generic.go:334] "Generic (PLEG): container finished" podID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerID="d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1" exitCode=0
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.739394 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssmnk" event={"ID":"6f57c049-b9d0-4b2b-9270-ba2319ad0dca","Type":"ContainerDied","Data":"d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1"}
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.739409 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ssmnk"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.739434 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ssmnk" event={"ID":"6f57c049-b9d0-4b2b-9270-ba2319ad0dca","Type":"ContainerDied","Data":"7b5bd0945c61fbcedc8dff89fcdc3af4907e1593fe1a71e8100ad756254f48af"}
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.739468 4813 scope.go:117] "RemoveContainer" containerID="d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.773091 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ssmnk"]
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.774814 4813 scope.go:117] "RemoveContainer" containerID="ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.799069 4813 scope.go:117] "RemoveContainer" containerID="e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.800527 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ssmnk"]
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.839409 4813 scope.go:117] "RemoveContainer" containerID="d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1"
Jan 29 17:05:21 crc kubenswrapper[4813]: E0129 17:05:21.840520 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1\": container with ID starting with d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1 not found: ID does not exist" containerID="d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.840603 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1"} err="failed to get container status \"d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1\": rpc error: code = NotFound desc = could not find container \"d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1\": container with ID starting with d445785da849eafd519e334f18648252389c3fca0497123bdad20f7a0e0ec4a1 not found: ID does not exist"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.840634 4813 scope.go:117] "RemoveContainer" containerID="ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1"
Jan 29 17:05:21 crc kubenswrapper[4813]: E0129 17:05:21.842712 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1\": container with ID starting with ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1 not found: ID does not exist" containerID="ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.842738 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1"} err="failed to get container status \"ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1\": rpc error: code = NotFound desc = could not find container \"ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1\": container with ID starting with ae8463fdd60b6d7e8472fd4614a9a59eb8611b942e27113112353438fe9b6df1 not found: ID does not exist"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.842754 4813 scope.go:117] "RemoveContainer" containerID="e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce"
Jan 29 17:05:21 crc kubenswrapper[4813]: E0129 17:05:21.843261 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce\": container with ID starting with e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce not found: ID does not exist" containerID="e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce"
Jan 29 17:05:21 crc kubenswrapper[4813]: I0129 17:05:21.843326 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce"} err="failed to get container status \"e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce\": rpc error: code = NotFound desc = could not find container \"e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce\": container with ID starting with e70260302244423a0402cbf8375280f4facf6a325d9c2203cb0db0f8476941ce not found: ID does not exist"
Jan 29 17:05:22 crc kubenswrapper[4813]: I0129 17:05:22.249404 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" path="/var/lib/kubelet/pods/6f57c049-b9d0-4b2b-9270-ba2319ad0dca/volumes"
Jan 29 17:05:26 crc kubenswrapper[4813]: I0129 17:05:26.671041 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.152383 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.152935 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" containerName="kube-state-metrics" containerID="cri-o://bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f" gracePeriod=30
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.240469 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.240542 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.671274 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.816292 4813 generic.go:334] "Generic (PLEG): container finished" podID="a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" containerID="bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f" exitCode=2
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.816360 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30","Type":"ContainerDied","Data":"bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f"}
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.816407 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30","Type":"ContainerDied","Data":"39919ff14a16305b0948dc639a1edb5e8c15163b3a61dd9155f0671e69311fd5"}
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.816414 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.816432 4813 scope.go:117] "RemoveContainer" containerID="bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f"
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.818001 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fgz\" (UniqueName: \"kubernetes.io/projected/a8b1a98d-6274-4af5-b861-b0c9d9dc0d30-kube-api-access-q2fgz\") pod \"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30\" (UID: \"a8b1a98d-6274-4af5-b861-b0c9d9dc0d30\") "
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.823850 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b1a98d-6274-4af5-b861-b0c9d9dc0d30-kube-api-access-q2fgz" (OuterVolumeSpecName: "kube-api-access-q2fgz") pod "a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" (UID: "a8b1a98d-6274-4af5-b861-b0c9d9dc0d30"). InnerVolumeSpecName "kube-api-access-q2fgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.848872 4813 scope.go:117] "RemoveContainer" containerID="bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f"
Jan 29 17:05:30 crc kubenswrapper[4813]: E0129 17:05:30.850629 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f\": container with ID starting with bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f not found: ID does not exist" containerID="bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f"
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.850695 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f"} err="failed to get container status \"bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f\": rpc error: code = NotFound desc = could not find container \"bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f\": container with ID starting with bf306124ba7bab5431713f0789561bda36a5c6a496a05f5e77ad0efc34807d8f not found: ID does not exist"
Jan 29 17:05:30 crc kubenswrapper[4813]: I0129 17:05:30.921775 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2fgz\" (UniqueName: \"kubernetes.io/projected/a8b1a98d-6274-4af5-b861-b0c9d9dc0d30-kube-api-access-q2fgz\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.154808 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.163765 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.182751 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 17:05:31 crc kubenswrapper[4813]: E0129 17:05:31.184045 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerName="extract-content"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.184062 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerName="extract-content"
Jan 29 17:05:31 crc kubenswrapper[4813]: E0129 17:05:31.184076 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" containerName="kube-state-metrics"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.184084 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" containerName="kube-state-metrics"
Jan 29 17:05:31 crc kubenswrapper[4813]: E0129 17:05:31.184096 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerName="extract-utilities"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.184104 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerName="extract-utilities"
Jan 29 17:05:31 crc kubenswrapper[4813]: E0129 17:05:31.184154 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerName="registry-server"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.184164 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerName="registry-server"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.184352 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" containerName="kube-state-metrics"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.184375 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f57c049-b9d0-4b2b-9270-ba2319ad0dca" containerName="registry-server"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.185001 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.187687 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.189193 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.194509 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.228185 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj7r\" (UniqueName: \"kubernetes.io/projected/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-api-access-njj7r\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.228256 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.228379 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.228484 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.336757 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.336897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njj7r\" (UniqueName: \"kubernetes.io/projected/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-api-access-njj7r\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.336975 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.337048 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.341842 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.355179 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njj7r\" (UniqueName: \"kubernetes.io/projected/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-api-access-njj7r\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.355270 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.363924 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.510530 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.963805 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.973706 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.974033 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="ceilometer-central-agent" containerID="cri-o://5f5a0255b32de3bfbc3839d6a30cc333786d3d305b59b44dae2b1fac6564c366" gracePeriod=30
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.974272 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="sg-core" containerID="cri-o://dafb4e0a912def79f44d01aafe94a7de52de4565116ea94454d5ca790eaabad2" gracePeriod=30
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.974367 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="ceilometer-notification-agent" containerID="cri-o://31326c9192bca4879c9d47c42389f6247ae3d703f51209d9887acf47c1a9a945" gracePeriod=30
Jan 29 17:05:31 crc kubenswrapper[4813]: I0129 17:05:31.974311 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="proxy-httpd" containerID="cri-o://84896b434c6c1c17df63e1f932fec30168843e2458046455117c7bf9441bfa8e" gracePeriod=30
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.263211 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b1a98d-6274-4af5-b861-b0c9d9dc0d30" path="/var/lib/kubelet/pods/a8b1a98d-6274-4af5-b861-b0c9d9dc0d30/volumes"
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.833965 4813 generic.go:334] "Generic (PLEG): container finished" podID="23623af2-d480-4e00-ae8d-51da73cee712" containerID="84896b434c6c1c17df63e1f932fec30168843e2458046455117c7bf9441bfa8e" exitCode=0
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.834298 4813 generic.go:334] "Generic (PLEG): container finished" podID="23623af2-d480-4e00-ae8d-51da73cee712" containerID="dafb4e0a912def79f44d01aafe94a7de52de4565116ea94454d5ca790eaabad2" exitCode=2
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.834309 4813 generic.go:334] "Generic (PLEG): container finished" podID="23623af2-d480-4e00-ae8d-51da73cee712" containerID="5f5a0255b32de3bfbc3839d6a30cc333786d3d305b59b44dae2b1fac6564c366" exitCode=0
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.834048 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerDied","Data":"84896b434c6c1c17df63e1f932fec30168843e2458046455117c7bf9441bfa8e"}
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.834346 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerDied","Data":"dafb4e0a912def79f44d01aafe94a7de52de4565116ea94454d5ca790eaabad2"}
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.834366 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerDied","Data":"5f5a0255b32de3bfbc3839d6a30cc333786d3d305b59b44dae2b1fac6564c366"}
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.835894 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6471b922-d5d2-4a43-ab3a-fbe69b43beb9","Type":"ContainerStarted","Data":"02d822763cfec808d3e70dcfe053468ed8dee511781bd6c54dfa3346b4eb9935"}
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.835932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6471b922-d5d2-4a43-ab3a-fbe69b43beb9","Type":"ContainerStarted","Data":"8fc0774379309064b6b334f290fda2d6ccfe576f3140132a8b2603e23940684d"}
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.836092 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 29 17:05:32 crc kubenswrapper[4813]: I0129 17:05:32.857526 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.468673565 podStartE2EDuration="1.857502939s" podCreationTimestamp="2026-01-29 17:05:31 +0000 UTC" firstStartedPulling="2026-01-29 17:05:31.955100249 +0000 UTC m=+2184.442303465" lastFinishedPulling="2026-01-29 17:05:32.343929613 +0000 UTC m=+2184.831132839" observedRunningTime="2026-01-29 17:05:32.84871826 +0000 UTC m=+2185.335921476" watchObservedRunningTime="2026-01-29 17:05:32.857502939 +0000 UTC m=+2185.344706155"
Jan 29 17:05:33 crc kubenswrapper[4813]: I0129 17:05:33.846846 4813 generic.go:334] "Generic (PLEG): container finished" podID="23623af2-d480-4e00-ae8d-51da73cee712" containerID="31326c9192bca4879c9d47c42389f6247ae3d703f51209d9887acf47c1a9a945" exitCode=0
Jan 29 17:05:33 crc kubenswrapper[4813]: I0129 17:05:33.846931 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerDied","Data":"31326c9192bca4879c9d47c42389f6247ae3d703f51209d9887acf47c1a9a945"}
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.126480 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.204713 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-scripts\") pod \"23623af2-d480-4e00-ae8d-51da73cee712\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") "
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.204790 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-run-httpd\") pod \"23623af2-d480-4e00-ae8d-51da73cee712\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") "
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.204863 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-config-data\") pod \"23623af2-d480-4e00-ae8d-51da73cee712\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") "
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.204922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-log-httpd\") pod \"23623af2-d480-4e00-ae8d-51da73cee712\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") "
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.204953 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-combined-ca-bundle\") pod \"23623af2-d480-4e00-ae8d-51da73cee712\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") "
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.204982 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzwwd\" (UniqueName: \"kubernetes.io/projected/23623af2-d480-4e00-ae8d-51da73cee712-kube-api-access-qzwwd\") pod \"23623af2-d480-4e00-ae8d-51da73cee712\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") "
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.205053 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-sg-core-conf-yaml\") pod \"23623af2-d480-4e00-ae8d-51da73cee712\" (UID: \"23623af2-d480-4e00-ae8d-51da73cee712\") "
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.205660 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "23623af2-d480-4e00-ae8d-51da73cee712" (UID: "23623af2-d480-4e00-ae8d-51da73cee712"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.206029 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "23623af2-d480-4e00-ae8d-51da73cee712" (UID: "23623af2-d480-4e00-ae8d-51da73cee712"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.213328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23623af2-d480-4e00-ae8d-51da73cee712-kube-api-access-qzwwd" (OuterVolumeSpecName: "kube-api-access-qzwwd") pod "23623af2-d480-4e00-ae8d-51da73cee712" (UID: "23623af2-d480-4e00-ae8d-51da73cee712"). InnerVolumeSpecName "kube-api-access-qzwwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.213485 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-scripts" (OuterVolumeSpecName: "scripts") pod "23623af2-d480-4e00-ae8d-51da73cee712" (UID: "23623af2-d480-4e00-ae8d-51da73cee712"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.235512 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "23623af2-d480-4e00-ae8d-51da73cee712" (UID: "23623af2-d480-4e00-ae8d-51da73cee712"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.287645 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23623af2-d480-4e00-ae8d-51da73cee712" (UID: "23623af2-d480-4e00-ae8d-51da73cee712"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.307811 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.307840 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.307851 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/23623af2-d480-4e00-ae8d-51da73cee712-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.307860 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzwwd\" (UniqueName: \"kubernetes.io/projected/23623af2-d480-4e00-ae8d-51da73cee712-kube-api-access-qzwwd\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.307869 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.307877 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.311346 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-config-data" (OuterVolumeSpecName: "config-data") pod "23623af2-d480-4e00-ae8d-51da73cee712" (UID: "23623af2-d480-4e00-ae8d-51da73cee712"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.409787 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23623af2-d480-4e00-ae8d-51da73cee712-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.860459 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"23623af2-d480-4e00-ae8d-51da73cee712","Type":"ContainerDied","Data":"5966bf15a1d90ccb9ff02b8235777415251e4a5bca5a2c131b755a8dc4041d71"}
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.860789 4813 scope.go:117] "RemoveContainer" containerID="84896b434c6c1c17df63e1f932fec30168843e2458046455117c7bf9441bfa8e"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.860513 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.893080 4813 scope.go:117] "RemoveContainer" containerID="dafb4e0a912def79f44d01aafe94a7de52de4565116ea94454d5ca790eaabad2"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.903958 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.914390 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.923721 4813 scope.go:117] "RemoveContainer" containerID="31326c9192bca4879c9d47c42389f6247ae3d703f51209d9887acf47c1a9a945"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.931671 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:05:34 crc kubenswrapper[4813]: E0129 17:05:34.932047 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="ceilometer-central-agent"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.932064 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="ceilometer-central-agent"
Jan 29 17:05:34 crc kubenswrapper[4813]: E0129 17:05:34.932122 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="ceilometer-notification-agent"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.932129 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="ceilometer-notification-agent"
Jan 29 17:05:34 crc kubenswrapper[4813]: E0129 17:05:34.932146 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="sg-core"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.932153 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="sg-core"
Jan 29 17:05:34 crc kubenswrapper[4813]: E0129 17:05:34.932172 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="proxy-httpd"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.932182 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="proxy-httpd"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.932545 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="proxy-httpd"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.932560 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="ceilometer-notification-agent"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.932573 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="sg-core"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.932588 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="23623af2-d480-4e00-ae8d-51da73cee712" containerName="ceilometer-central-agent"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.934613 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.937278 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.937634 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.937768 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.962389 4813 scope.go:117] "RemoveContainer" containerID="5f5a0255b32de3bfbc3839d6a30cc333786d3d305b59b44dae2b1fac6564c366"
Jan 29 17:05:34 crc kubenswrapper[4813]: I0129 17:05:34.977948 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.020177 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.020317 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-config-data\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.020346 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-run-httpd\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.020407 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.020450 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.020492 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-log-httpd\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.020519 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-scripts\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.020536 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwwt6\" (UniqueName: \"kubernetes.io/projected/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-kube-api-access-jwwt6\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.122485 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.122817 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-config-data\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.122841 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-run-httpd\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.122890 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.122908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.122938 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-log-httpd\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.123084 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-scripts\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.123483 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-run-httpd\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.123531 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-log-httpd\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.123598 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwwt6\" (UniqueName: \"kubernetes.io/projected/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-kube-api-access-jwwt6\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.128973 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.129028 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.129291 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-config-data\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.129552 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-scripts\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.129709 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.151139 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwwt6\" (UniqueName: \"kubernetes.io/projected/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-kube-api-access-jwwt6\") pod \"ceilometer-0\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.273254 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.752259 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 17:05:35 crc kubenswrapper[4813]: W0129 17:05:35.757496 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda37a75d1_c71b_4af5_9fe8_83b53b60b86e.slice/crio-6f5a7d6852568464b0d76f0a5aed82a725db41cb37d996a42cff1211c66e26ed WatchSource:0}: Error finding container 6f5a7d6852568464b0d76f0a5aed82a725db41cb37d996a42cff1211c66e26ed: Status 404 returned error can't find the container with id 6f5a7d6852568464b0d76f0a5aed82a725db41cb37d996a42cff1211c66e26ed
Jan 29 17:05:35 crc kubenswrapper[4813]: I0129 17:05:35.870897 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerStarted","Data":"6f5a7d6852568464b0d76f0a5aed82a725db41cb37d996a42cff1211c66e26ed"}
Jan 29 17:05:36 crc kubenswrapper[4813]: I0129 17:05:36.250053 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23623af2-d480-4e00-ae8d-51da73cee712" path="/var/lib/kubelet/pods/23623af2-d480-4e00-ae8d-51da73cee712/volumes"
Jan 29 17:05:36 crc kubenswrapper[4813]: I0129 17:05:36.883368 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerStarted","Data":"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072"}
Jan 29 17:05:37 crc kubenswrapper[4813]: I0129 17:05:37.901207 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerStarted","Data":"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d"}
Jan 29 17:05:37 crc kubenswrapper[4813]: I0129 17:05:37.902827 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerStarted","Data":"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098"}
Jan 29 17:05:40 crc kubenswrapper[4813]: I0129 17:05:40.928313 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerStarted","Data":"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea"}
Jan 29 17:05:40 crc kubenswrapper[4813]: I0129 17:05:40.928765 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 17:05:40 crc kubenswrapper[4813]: I0129 17:05:40.952251 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.01361172 podStartE2EDuration="6.952229812s" podCreationTimestamp="2026-01-29 17:05:34 +0000 UTC" firstStartedPulling="2026-01-29 17:05:35.759962887 +0000 UTC m=+2188.247166103" lastFinishedPulling="2026-01-29 17:05:39.698580979 +0000 UTC m=+2192.185784195" observedRunningTime="2026-01-29 17:05:40.946984203 +0000 UTC m=+2193.434187419" watchObservedRunningTime="2026-01-29 17:05:40.952229812 +0000 UTC m=+2193.439433028"
Jan 29 17:05:41 crc kubenswrapper[4813]: I0129 17:05:41.524374 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.457025 4813 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openstack/root-account-create-update-9wz8r"] Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.466931 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.470481 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.492998 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9wz8r"] Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.577750 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.578174 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="124e029a-6d27-4b49-830c-4be46fc186cc" containerName="openstackclient" containerID="cri-o://6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0" gracePeriod=2 Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.610486 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8nhx\" (UniqueName: \"kubernetes.io/projected/cbb62edf-bd31-473d-8833-664cb8007f92-kube-api-access-l8nhx\") pod \"root-account-create-update-9wz8r\" (UID: \"cbb62edf-bd31-473d-8833-664cb8007f92\") " pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.610571 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts\") pod \"root-account-create-update-9wz8r\" (UID: \"cbb62edf-bd31-473d-8833-664cb8007f92\") " pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.610618 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.650197 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9248-account-create-update-t2pxw"] Jan 29 17:05:47 crc kubenswrapper[4813]: E0129 17:05:47.650772 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124e029a-6d27-4b49-830c-4be46fc186cc" containerName="openstackclient" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.650796 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="124e029a-6d27-4b49-830c-4be46fc186cc" containerName="openstackclient" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.650994 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="124e029a-6d27-4b49-830c-4be46fc186cc" containerName="openstackclient" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.651827 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.659747 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.660064 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9248-account-create-update-t2pxw"] Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.718575 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts\") pod \"root-account-create-update-9wz8r\" (UID: \"cbb62edf-bd31-473d-8833-664cb8007f92\") " pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.718720 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8nhx\" (UniqueName: \"kubernetes.io/projected/cbb62edf-bd31-473d-8833-664cb8007f92-kube-api-access-l8nhx\") pod \"root-account-create-update-9wz8r\" (UID: \"cbb62edf-bd31-473d-8833-664cb8007f92\") " pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.728829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts\") pod \"root-account-create-update-9wz8r\" (UID: \"cbb62edf-bd31-473d-8833-664cb8007f92\") " pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.796030 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8nhx\" (UniqueName: \"kubernetes.io/projected/cbb62edf-bd31-473d-8833-664cb8007f92-kube-api-access-l8nhx\") pod \"root-account-create-update-9wz8r\" (UID: \"cbb62edf-bd31-473d-8833-664cb8007f92\") " pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.867287 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.869345 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58nb4\" (UniqueName: \"kubernetes.io/projected/f087a01a-3ee6-4655-9a03-a48fd4f903bb-kube-api-access-58nb4\") pod \"placement-9248-account-create-update-t2pxw\" (UID: \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\") " pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.869482 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f087a01a-3ee6-4655-9a03-a48fd4f903bb-operator-scripts\") pod \"placement-9248-account-create-update-t2pxw\" (UID: \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\") " pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.878402 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cb83-account-create-update-8jlrr"] Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.936443 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.959605 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cb83-account-create-update-8jlrr"] Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.960454 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.971827 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58nb4\" (UniqueName: \"kubernetes.io/projected/f087a01a-3ee6-4655-9a03-a48fd4f903bb-kube-api-access-58nb4\") pod \"placement-9248-account-create-update-t2pxw\" (UID: \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\") " pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.971897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f087a01a-3ee6-4655-9a03-a48fd4f903bb-operator-scripts\") pod \"placement-9248-account-create-update-t2pxw\" (UID: \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\") " pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:47 crc kubenswrapper[4813]: I0129 17:05:47.972820 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f087a01a-3ee6-4655-9a03-a48fd4f903bb-operator-scripts\") pod \"placement-9248-account-create-update-t2pxw\" (UID: \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\") " pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.021810 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.043985 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-cf77-account-create-update-t6fp5"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.045849 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.048118 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cf77-account-create-update-t6fp5"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.058722 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58nb4\" (UniqueName: \"kubernetes.io/projected/f087a01a-3ee6-4655-9a03-a48fd4f903bb-kube-api-access-58nb4\") pod \"placement-9248-account-create-update-t2pxw\" (UID: \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\") " pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.075531 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.076006 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="ovn-northd" containerID="cri-o://9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c" gracePeriod=30 Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.076260 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="openstack-network-exporter" containerID="cri-o://fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64" gracePeriod=30 Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.086444 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.087428 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3db8f1-dec6-4406-99e2-fce32fd0792a-operator-scripts\") pod \"neutron-cb83-account-create-update-8jlrr\" (UID: \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\") " pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.087900 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwhf\" (UniqueName: \"kubernetes.io/projected/6d3db8f1-dec6-4406-99e2-fce32fd0792a-kube-api-access-8fwhf\") pod \"neutron-cb83-account-create-update-8jlrr\" (UID: \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\") " pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.095273 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6699-account-create-update-gsbdz"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.107711 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-mwds2"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.117237 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6699-account-create-update-gsbdz"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.140862 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-mwds2"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.190707 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3db8f1-dec6-4406-99e2-fce32fd0792a-operator-scripts\") pod \"neutron-cb83-account-create-update-8jlrr\" 
(UID: \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\") " pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.196619 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-operator-scripts\") pod \"glance-cf77-account-create-update-t6fp5\" (UID: \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\") " pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.196665 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp855\" (UniqueName: \"kubernetes.io/projected/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-kube-api-access-tp855\") pod \"glance-cf77-account-create-update-t6fp5\" (UID: \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\") " pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.196715 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwhf\" (UniqueName: \"kubernetes.io/projected/6d3db8f1-dec6-4406-99e2-fce32fd0792a-kube-api-access-8fwhf\") pod \"neutron-cb83-account-create-update-8jlrr\" (UID: \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\") " pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:48 crc kubenswrapper[4813]: E0129 17:05:48.196912 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:48 crc kubenswrapper[4813]: E0129 17:05:48.196994 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data podName:bda951f8-8354-4ca3-be9e-f92f6fea40cc nodeName:}" failed. No retries permitted until 2026-01-29 17:05:48.696962497 +0000 UTC m=+2201.184165713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data") pod "rabbitmq-cell1-server-0" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc") : configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.191947 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3db8f1-dec6-4406-99e2-fce32fd0792a-operator-scripts\") pod \"neutron-cb83-account-create-update-8jlrr\" (UID: \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\") " pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.280578 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.294213 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwhf\" (UniqueName: \"kubernetes.io/projected/6d3db8f1-dec6-4406-99e2-fce32fd0792a-kube-api-access-8fwhf\") pod \"neutron-cb83-account-create-update-8jlrr\" (UID: \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\") " pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.299821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-operator-scripts\") pod \"glance-cf77-account-create-update-t6fp5\" (UID: \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\") " pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.299897 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp855\" (UniqueName: \"kubernetes.io/projected/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-kube-api-access-tp855\") pod \"glance-cf77-account-create-update-t6fp5\" (UID: \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\") " pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.311043 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-operator-scripts\") pod \"glance-cf77-account-create-update-t6fp5\" (UID: \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\") " pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.399311 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6" path="/var/lib/kubelet/pods/944ccdb5-b116-4fa7-bd5b-f88e9c6cd0b6/volumes" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.422888 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp855\" (UniqueName: \"kubernetes.io/projected/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-kube-api-access-tp855\") pod \"glance-cf77-account-create-update-t6fp5\" (UID: \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\") " pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.427672 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d484b71c-e076-43d8-ac63-afe47f877f98" path="/var/lib/kubelet/pods/d484b71c-e076-43d8-ac63-afe47f877f98/volumes" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.429231 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-043c-account-create-update-m4hzv"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.429313 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.429339 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-043c-account-create-update-m4hzv"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.431133 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerName="openstack-network-exporter" containerID="cri-o://a98d4fc4fffd6c7b6ad588f19d86d27cf2a531b07982798f2fbbced264241b68" gracePeriod=300 Jan 29 
17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.462071 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.488904 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.496637 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-72hcg"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.544367 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-72hcg"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.567626 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-6l7qs"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.584870 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerName="ovsdbserver-sb" containerID="cri-o://8e8b76ed0fa013c6fec75384e05efa7f1f3c80354a95f50dea2ec3a295bf5a92" gracePeriod=300 Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.589160 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-6l7qs"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.605430 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-z9f8j"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.647166 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-z9f8j"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.663330 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-qw2k9"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.663781 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-qw2k9" podUID="e994ed3b-ff92-4997-b056-0c3b37fcebcf" containerName="openstack-network-exporter" containerID="cri-o://9ed8ecb763d33646b3e8cc91f896a3b2de2c4f6990d7c4d2843989603afddfed" gracePeriod=30 Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.676276 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-j8nqz"] Jan 29 17:05:48 crc kubenswrapper[4813]: E0129 17:05:48.707507 4813 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.47:44312->38.102.83.47:38751: write tcp 38.102.83.47:44312->38.102.83.47:38751: write: broken pipe Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.709233 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-j8nqz"] Jan 29 17:05:48 crc kubenswrapper[4813]: E0129 17:05:48.719267 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:48 crc kubenswrapper[4813]: E0129 17:05:48.719332 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data podName:bda951f8-8354-4ca3-be9e-f92f6fea40cc nodeName:}" failed. No retries permitted until 2026-01-29 17:05:49.719317211 +0000 UTC m=+2202.206520427 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data") pod "rabbitmq-cell1-server-0" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc") : configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.771089 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.771452 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="openstack-network-exporter" containerID="cri-o://93a587b04c51abd55fec3ccad7f3b0bfc20db316641f2585e79d269b2d6979d1" gracePeriod=300 Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.819996 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-xqdpz"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.861939 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cmsdz"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.887045 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.887393 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="cinder-scheduler" containerID="cri-o://96ce0e98214ecc3dca853a25bbf06658a39336d51fa186c97aff7d03e5d42077" gracePeriod=30 Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.887560 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="probe" containerID="cri-o://b2561ed4f6f62f8ecb439c1fcdc29a4178bfc26d6202bdfabc617cd3c474d99c" gracePeriod=30 Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.949300 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4tc6w"] Jan 29 17:05:48 crc kubenswrapper[4813]: I0129 17:05:48.974505 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4tc6w"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:48.997170 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.006124 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="ovsdbserver-nb" containerID="cri-o://327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4" gracePeriod=300 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.058783 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7dcb464dcd-dklmw"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.059814 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7dcb464dcd-dklmw" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerName="placement-log" containerID="cri-o://e28682307f39725fe7f765dc526b169a558cafd630133cfcaa46e4c31f5198f9" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.060653 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7dcb464dcd-dklmw" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerName="placement-api" 
containerID="cri-o://b2f593231b9c6777313a8a59f22732777b3c29a364623b14415c2eec818f17c5" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.065532 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.065616 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data podName:6463fe6f-cd6d-4078-8fa2-0d167de480df nodeName:}" failed. No retries permitted until 2026-01-29 17:05:49.565591402 +0000 UTC m=+2202.052794618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data") pod "rabbitmq-server-0" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df") : configmap "rabbitmq-config-data" not found Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.117052 4813 log.go:32] "ExecSync cmd from runtime service failed" err=< Jan 29 17:05:49 crc kubenswrapper[4813]: rpc error: code = Unknown desc = command error: setns `mnt`: Bad file descriptor Jan 29 17:05:49 crc kubenswrapper[4813]: fail startup Jan 29 17:05:49 crc kubenswrapper[4813]: , stdout: , stderr: , exit code -1 Jan 29 17:05:49 crc kubenswrapper[4813]: > containerID="327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.118386 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4 is running failed: container process not found" containerID="327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.119080 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4 is running failed: container process not found" containerID="327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.119136 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4 is running failed: container process not found" probeType="Liveness" pod="openstack/ovsdbserver-nb-0" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="ovsdbserver-nb" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.124595 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-nvszj"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.125000 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" podUID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" containerName="dnsmasq-dns" containerID="cri-o://ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d" gracePeriod=10 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.236564 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" podUID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: connect: connection refused" Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.239183 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 17:05:49 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 17:05:49 crc kubenswrapper[4813]: Jan 29 17:05:49 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 17:05:49 crc kubenswrapper[4813]: Jan 29 17:05:49 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 17:05:49 crc kubenswrapper[4813]: Jan 29 17:05:49 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 17:05:49 crc kubenswrapper[4813]: Jan 29 17:05:49 crc kubenswrapper[4813]: if [ -n "" ]; then Jan 29 17:05:49 crc kubenswrapper[4813]: GRANT_DATABASE="" Jan 29 17:05:49 crc kubenswrapper[4813]: else Jan 29 17:05:49 crc kubenswrapper[4813]: GRANT_DATABASE="*" Jan 29 17:05:49 crc kubenswrapper[4813]: fi Jan 29 17:05:49 crc kubenswrapper[4813]: Jan 29 17:05:49 crc kubenswrapper[4813]: # going for maximum compatibility here: Jan 29 17:05:49 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 17:05:49 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 17:05:49 crc kubenswrapper[4813]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 17:05:49 crc kubenswrapper[4813]: # support updates Jan 29 17:05:49 crc kubenswrapper[4813]: Jan 29 17:05:49 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.240878 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-9wz8r" podUID="cbb62edf-bd31-473d-8833-664cb8007f92" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.244548 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.244806 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api-log" containerID="cri-o://8a7ea31b28e2c6bfe834bf71fe3196b448efd62becd905ed803c99a217f36124" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.245210 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api" containerID="cri-o://33964ffaf6cb92f00377625e2792de4b5bdd034d751dddf7790ae69bc7e4ed47" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.247566 4813 generic.go:334] "Generic (PLEG): container finished" podID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerID="fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64" exitCode=2 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.251597 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"ee2dda38-9821-41ec-b524-13a48badf5e9","Type":"ContainerDied","Data":"fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64"} Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.260757 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9wz8r"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.268458 4813 generic.go:334] "Generic (PLEG): container finished" podID="220240d2-4982-4884-80eb-09b077e332a1" containerID="f184d025a4e1906806b96ee93fd5b719824f5d60bcbcf8baa54ffe70d539920a" exitCode=2 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.268637 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-h8ndz" event={"ID":"220240d2-4982-4884-80eb-09b077e332a1","Type":"ContainerDied","Data":"f184d025a4e1906806b96ee93fd5b719824f5d60bcbcf8baa54ffe70d539920a"} Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.269274 4813 scope.go:117] "RemoveContainer" containerID="f184d025a4e1906806b96ee93fd5b719824f5d60bcbcf8baa54ffe70d539920a" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.269651 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.270145 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-log" containerID="cri-o://9f57c05f59032c6a2d1a7cccf2fb5f3a52e7578aa8e9376e28838edd405492ff" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.270243 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-metadata" containerID="cri-o://e05393a1d8d62733f2c992a97a9c85905297b74ae0d29c1c6ef323e9a3b39679" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.316258 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.407257 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-fpmz6"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.439459 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_57a6aaa4-f80a-49fa-8236-967825494243/ovsdbserver-nb/0.log" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.439677 4813 generic.go:334] "Generic (PLEG): container finished" podID="57a6aaa4-f80a-49fa-8236-967825494243" containerID="93a587b04c51abd55fec3ccad7f3b0bfc20db316641f2585e79d269b2d6979d1" exitCode=2 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.439796 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"57a6aaa4-f80a-49fa-8236-967825494243","Type":"ContainerDied","Data":"93a587b04c51abd55fec3ccad7f3b0bfc20db316641f2585e79d269b2d6979d1"} Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.465308 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-fpmz6"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.497239 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.499670 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" 
containerName="nova-api-api" containerID="cri-o://34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.504072 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-log" containerID="cri-o://6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.533490 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-886657b65-p552j"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.533835 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-886657b65-p552j" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerName="neutron-api" containerID="cri-o://0e1eabbbaa63c5764c44a582ebc5008e28c901ea880bd7fdcfd278bd736cbcb5" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.534321 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-886657b65-p552j" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerName="neutron-httpd" containerID="cri-o://5aa28b5649ace6006edcf7ca887bd1062f191ab08030423dc4e8ee65bc52dfff" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.544741 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b3b9c405-fbca-44fa-820a-1613a7df4c9c/ovsdbserver-sb/0.log" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.544808 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerID="a98d4fc4fffd6c7b6ad588f19d86d27cf2a531b07982798f2fbbced264241b68" exitCode=2 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.544831 4813 generic.go:334] "Generic (PLEG): container finished" podID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerID="8e8b76ed0fa013c6fec75384e05efa7f1f3c80354a95f50dea2ec3a295bf5a92" exitCode=143 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.544945 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b3b9c405-fbca-44fa-820a-1613a7df4c9c","Type":"ContainerDied","Data":"a98d4fc4fffd6c7b6ad588f19d86d27cf2a531b07982798f2fbbced264241b68"} Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.544983 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b3b9c405-fbca-44fa-820a-1613a7df4c9c","Type":"ContainerDied","Data":"8e8b76ed0fa013c6fec75384e05efa7f1f3c80354a95f50dea2ec3a295bf5a92"} Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.583871 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qw2k9_e994ed3b-ff92-4997-b056-0c3b37fcebcf/openstack-network-exporter/0.log" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.584212 4813 generic.go:334] "Generic (PLEG): container finished" podID="e994ed3b-ff92-4997-b056-0c3b37fcebcf" containerID="9ed8ecb763d33646b3e8cc91f896a3b2de2c4f6990d7c4d2843989603afddfed" exitCode=2 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.584245 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qw2k9" event={"ID":"e994ed3b-ff92-4997-b056-0c3b37fcebcf","Type":"ContainerDied","Data":"9ed8ecb763d33646b3e8cc91f896a3b2de2c4f6990d7c4d2843989603afddfed"} Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.586701 4813 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4sxkl"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.597184 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4sxkl"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.621837 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b3b9c405-fbca-44fa-820a-1613a7df4c9c/ovsdbserver-sb/0.log" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.621931 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.632019 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.632078 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data podName:6463fe6f-cd6d-4078-8fa2-0d167de480df nodeName:}" failed. No retries permitted until 2026-01-29 17:05:50.632063724 +0000 UTC m=+2203.119266940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data") pod "rabbitmq-server-0" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df") : configmap "rabbitmq-config-data" not found Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.656050 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.656317 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="494a1163-f584-43fa-9224-825be2a90c27" containerName="glance-log" containerID="cri-o://92a3054a95c28ed9a7e7347b5d0b1b2fbb5102f2edf17c6d56d65e646701fc21" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.656467 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="494a1163-f584-43fa-9224-825be2a90c27" containerName="glance-httpd" containerID="cri-o://8abb7a4ee4a3b9fc231ace5a03ef91398ee318f728d8debd57c6d16f0aae7ba3" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.704179 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5b9868448c-29tkg"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.704516 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5b9868448c-29tkg" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerName="barbican-worker-log" containerID="cri-o://085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.704948 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5b9868448c-29tkg" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerName="barbican-worker" containerID="cri-o://ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.733975 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-combined-ca-bundle\") pod 
\"b3b9c405-fbca-44fa-820a-1613a7df4c9c\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.734149 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdb-rundir\") pod \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.734375 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-scripts\") pod \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.734410 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdbserver-sb-tls-certs\") pod \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.734479 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-metrics-certs-tls-certs\") pod \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.734545 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs7pb\" (UniqueName: \"kubernetes.io/projected/b3b9c405-fbca-44fa-820a-1613a7df4c9c-kube-api-access-qs7pb\") pod \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.734573 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.734702 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-config\") pod \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\" (UID: \"b3b9c405-fbca-44fa-820a-1613a7df4c9c\") " Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.735630 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:49 crc kubenswrapper[4813]: E0129 17:05:49.735691 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data podName:bda951f8-8354-4ca3-be9e-f92f6fea40cc nodeName:}" failed. No retries permitted until 2026-01-29 17:05:51.735672697 +0000 UTC m=+2204.222875913 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data") pod "rabbitmq-cell1-server-0" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc") : configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.735731 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-scripts" (OuterVolumeSpecName: "scripts") pod "b3b9c405-fbca-44fa-820a-1613a7df4c9c" (UID: "b3b9c405-fbca-44fa-820a-1613a7df4c9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.737318 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "b3b9c405-fbca-44fa-820a-1613a7df4c9c" (UID: "b3b9c405-fbca-44fa-820a-1613a7df4c9c"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.739179 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-config" (OuterVolumeSpecName: "config") pod "b3b9c405-fbca-44fa-820a-1613a7df4c9c" (UID: "b3b9c405-fbca-44fa-820a-1613a7df4c9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.747661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "b3b9c405-fbca-44fa-820a-1613a7df4c9c" (UID: "b3b9c405-fbca-44fa-820a-1613a7df4c9c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.775476 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-864b755644-8j2zm"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.775748 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerName="barbican-keystone-listener-log" containerID="cri-o://c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.776246 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerName="barbican-keystone-listener" containerID="cri-o://d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.780940 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b9c405-fbca-44fa-820a-1613a7df4c9c-kube-api-access-qs7pb" (OuterVolumeSpecName: "kube-api-access-qs7pb") pod "b3b9c405-fbca-44fa-820a-1613a7df4c9c" (UID: "b3b9c405-fbca-44fa-820a-1613a7df4c9c"). InnerVolumeSpecName "kube-api-access-qs7pb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.783296 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="392bc7cc-af71-4ee6-b844-e5adeeabba64" containerName="galera" containerID="cri-o://d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.793400 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-65d9b85856-5rdmb"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.795330 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-65d9b85856-5rdmb" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api-log" containerID="cri-o://a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.796137 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-65d9b85856-5rdmb" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api" containerID="cri-o://ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.802072 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.802829 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-server" containerID="cri-o://357fa21995192338eea4cf618d4dd037f9d9c53370af078711f773adf08dcd27" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.802966 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-expirer" containerID="cri-o://0912400cbaf3d64dac1bbb048041811be489912643b2b55d8cdd00b6904d5618" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803001 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-replicator" containerID="cri-o://85396d22d1bacb74de0dc15e3d6568fd0dd75a24110e76865a22e9698f5e03d7" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803063 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-updater" containerID="cri-o://974ef80b14499b0e09dc5f020a2ac326c98830e1859b4884bb3296cb6d847ee5" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803032 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="swift-recon-cron" containerID="cri-o://3e0e0ef163807a59b2a6a361cb8f9f50111fa4482f203c2cb3078cc29529bdee" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803045 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="rsync" containerID="cri-o://0cf8a700e82655607fac9d5a91b1fe7324cf5f6a7f959b303b9cc0b73826bb23" gracePeriod=30 Jan 29 17:05:49 
crc kubenswrapper[4813]: I0129 17:05:49.802964 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-server" containerID="cri-o://18d667fd3f825a0180d5ee212d62dc003f4d18f1f616c39664709f8fd3e22de4" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803192 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-updater" containerID="cri-o://57a4c1b42a22a49c5352631c53b0536257db9090a489fe6d5ac1fae934fe7b10" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803252 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-auditor" containerID="cri-o://8300f3dbc427b40576635dc7f78e11d14f9f9f09cfe4720072bd30ea18c40a96" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803291 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-replicator" containerID="cri-o://e9e8a5f597dfe66901c775ed9f99ffb45cc88a323355d8c8e377c086ac43eb70" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803358 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-auditor" containerID="cri-o://632060180af065e41041ad9ffff2395b8df04b6c43e12683432988737705e770" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803177 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-server" containerID="cri-o://6b315628dc01734b17c3bfcb292a1a9b22d94c5ec378c68879f38fea37cae951" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803423 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-reaper" containerID="cri-o://34a647053be6be5646ecc99401b474db714d929ce717508cb1f765ce77b6559c" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803363 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-auditor" containerID="cri-o://2cc8dee52a522946f284add3268d6d09db41b3c578932c60a72ffe29a472f072" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.803494 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-replicator" containerID="cri-o://c9a12f754af1b256a78355b7b3a3e4f909d6a20d352cb12126f67304e678fc2f" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.820528 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.820834 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="296a369a-d653-4033-9a10-7077b62875f1" 
containerName="glance-log" containerID="cri-o://a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.821409 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="296a369a-d653-4033-9a10-7077b62875f1" containerName="glance-httpd" containerID="cri-o://c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.832080 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9248-account-create-update-t2pxw"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.839660 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.839711 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs7pb\" (UniqueName: \"kubernetes.io/projected/b3b9c405-fbca-44fa-820a-1613a7df4c9c-kube-api-access-qs7pb\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.839751 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.839766 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3b9c405-fbca-44fa-820a-1613a7df4c9c-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.839779 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.848059 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.848399 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ade16a11-d1df-4706-9439-d0ef93069a66" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae" gracePeriod=30 Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.935446 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-f5qpr"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.950930 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-f5qpr"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.961798 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cb83-account-create-update-8jlrr"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.961957 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.976217 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wksqv"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.983561 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3b9c405-fbca-44fa-820a-1613a7df4c9c" (UID: "b3b9c405-fbca-44fa-820a-1613a7df4c9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.983613 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9wz8r"] Jan 29 17:05:49 crc kubenswrapper[4813]: I0129 17:05:49.998563 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wksqv"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.007989 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cf77-account-create-update-t6fp5"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.018414 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5b8c-account-create-update-wq4nd"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.028518 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5b8c-account-create-update-wq4nd"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.035388 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.047058 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-58968d868f-mvfqv"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.047316 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-58968d868f-mvfqv" podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerName="proxy-httpd" containerID="cri-o://2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb" gracePeriod=30 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.047770 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-58968d868f-mvfqv" podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerName="proxy-server" containerID="cri-o://f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a" gracePeriod=30 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.055611 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cnf4h"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.057543 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.057565 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.086854 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cnf4h"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.097265 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.097500 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="7baf1369-6dba-465b-aa2f-f518ae175d80" containerName="nova-cell1-conductor-conductor" 
containerID="cri-o://631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" gracePeriod=30 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.131773 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" containerID="cri-o://2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" gracePeriod=29 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.173431 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7g6dr"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.189837 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerName="rabbitmq" containerID="cri-o://4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269" gracePeriod=604800 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.196237 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.196585 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="51bbe82b-cffe-4d8b-ac7d-55507916528e" containerName="nova-cell0-conductor-conductor" containerID="cri-o://38b2d85eb15f915f427130dc15b7810017d065f72bae0ee347154971510344f8" gracePeriod=30 Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.199862 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.200222 4813 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 17:05:50 crc kubenswrapper[4813]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 17:05:50 crc kubenswrapper[4813]: + source /usr/local/bin/container-scripts/functions Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNBridge=br-int Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNRemote=tcp:localhost:6642 Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNEncapType=geneve Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNAvailabilityZones= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ EnableChassisAsGateway=true Jan 29 17:05:50 crc kubenswrapper[4813]: ++ PhysicalNetworks= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNHostName= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 17:05:50 crc kubenswrapper[4813]: ++ ovs_dir=/var/lib/openvswitch Jan 29 17:05:50 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 17:05:50 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 17:05:50 crc kubenswrapper[4813]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + sleep 0.5 Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + sleep 0.5 Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + cleanup_ovsdb_server_semaphore Jan 29 17:05:50 crc kubenswrapper[4813]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 17:05:50 crc kubenswrapper[4813]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 17:05:50 crc kubenswrapper[4813]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-xqdpz" message=< Jan 29 17:05:50 crc kubenswrapper[4813]: Exiting ovsdb-server (5) [ OK ] Jan 29 17:05:50 crc kubenswrapper[4813]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 17:05:50 crc kubenswrapper[4813]: + source /usr/local/bin/container-scripts/functions Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNBridge=br-int Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNRemote=tcp:localhost:6642 Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNEncapType=geneve Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNAvailabilityZones= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ EnableChassisAsGateway=true Jan 29 17:05:50 crc kubenswrapper[4813]: ++ PhysicalNetworks= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNHostName= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 17:05:50 crc kubenswrapper[4813]: ++ ovs_dir=/var/lib/openvswitch Jan 29 17:05:50 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 17:05:50 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 17:05:50 crc kubenswrapper[4813]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + sleep 0.5 Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + sleep 0.5 Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + cleanup_ovsdb_server_semaphore Jan 29 17:05:50 crc kubenswrapper[4813]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 17:05:50 crc kubenswrapper[4813]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 17:05:50 crc kubenswrapper[4813]: > Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.200257 4813 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 17:05:50 crc kubenswrapper[4813]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 17:05:50 crc kubenswrapper[4813]: + source /usr/local/bin/container-scripts/functions Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNBridge=br-int Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNRemote=tcp:localhost:6642 Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNEncapType=geneve Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNAvailabilityZones= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ EnableChassisAsGateway=true Jan 29 17:05:50 crc kubenswrapper[4813]: ++ PhysicalNetworks= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ OVNHostName= Jan 29 17:05:50 crc kubenswrapper[4813]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 17:05:50 crc kubenswrapper[4813]: ++ ovs_dir=/var/lib/openvswitch Jan 29 17:05:50 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 17:05:50 crc kubenswrapper[4813]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 17:05:50 crc kubenswrapper[4813]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + sleep 0.5 Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + sleep 0.5 Jan 29 17:05:50 crc kubenswrapper[4813]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 17:05:50 crc kubenswrapper[4813]: + cleanup_ovsdb_server_semaphore Jan 29 17:05:50 crc kubenswrapper[4813]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 17:05:50 crc kubenswrapper[4813]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 17:05:50 crc kubenswrapper[4813]: > pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" containerID="cri-o://77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.200311 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" containerID="cri-o://77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" gracePeriod=29 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.209414 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "b3b9c405-fbca-44fa-820a-1613a7df4c9c" (UID: "b3b9c405-fbca-44fa-820a-1613a7df4c9c"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.228710 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.232589 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.232657 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7baf1369-6dba-465b-aa2f-f518ae175d80" containerName="nova-cell1-conductor-conductor" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.256774 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b3b9c405-fbca-44fa-820a-1613a7df4c9c" (UID: "b3b9c405-fbca-44fa-820a-1613a7df4c9c"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.307859 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.308343 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.310663 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.311446 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:05:50 crc 
kubenswrapper[4813]: E0129 17:05:50.311500 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.314589 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.315105 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3b9c405-fbca-44fa-820a-1613a7df4c9c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.315455 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="132e3c9c-5650-431b-8778-01be07a47038" path="/var/lib/kubelet/pods/132e3c9c-5650-431b-8778-01be07a47038/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.324358 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.326861 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qw2k9_e994ed3b-ff92-4997-b056-0c3b37fcebcf/openstack-network-exporter/0.log" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.326930 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.330359 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.330482 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.337864 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf146e2-d6a8-4c27-a3f5-ec80641f6017" path="/var/lib/kubelet/pods/1bf146e2-d6a8-4c27-a3f5-ec80641f6017/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.339634 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cfb8782-48f0-49b0-a2f8-378b60f304c7" path="/var/lib/kubelet/pods/1cfb8782-48f0-49b0-a2f8-378b60f304c7/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.340324 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef5112b-ed61-4433-9339-a6bd16e11462" path="/var/lib/kubelet/pods/1ef5112b-ed61-4433-9339-a6bd16e11462/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.341088 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3acd2fcf-e5d3-43bc-b216-225edbc7114a" path="/var/lib/kubelet/pods/3acd2fcf-e5d3-43bc-b216-225edbc7114a/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.342332 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40a433a1-cd64-432a-ac5a-1d8367a3a723" path="/var/lib/kubelet/pods/40a433a1-cd64-432a-ac5a-1d8367a3a723/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.342925 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1abced-dba3-4096-b7b5-f9f17fe32d90" path="/var/lib/kubelet/pods/4d1abced-dba3-4096-b7b5-f9f17fe32d90/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.344510 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fdd976a-96f2-4c75-bab4-b557d5c6c025" path="/var/lib/kubelet/pods/5fdd976a-96f2-4c75-bab4-b557d5c6c025/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.345152 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ee298b-a2bb-4f72-874c-6a0a27a56d9a" path="/var/lib/kubelet/pods/81ee298b-a2bb-4f72-874c-6a0a27a56d9a/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.346942 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d3644d-4332-4c4e-a354-11aa4588e143" path="/var/lib/kubelet/pods/a9d3644d-4332-4c4e-a354-11aa4588e143/volumes" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.347695 4813 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.352035 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb3dcd69-3478-4b64-86b4-9d5b22b803c8" path="/var/lib/kubelet/pods/cb3dcd69-3478-4b64-86b4-9d5b22b803c8/volumes"
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.354671 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7c43bd-490e-4742-8a72-ed0687faf4dd" path="/var/lib/kubelet/pods/db7c43bd-490e-4742-8a72-ed0687faf4dd/volumes"
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.355418 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7g6dr"]
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.393546 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.396453 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj"
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.410756 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.411032 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9bd38907-4473-487b-8f79-85baaca96f00" containerName="nova-scheduler-scheduler" containerID="cri-o://4b0358d90883c6e8478c772d971d2902eb1bde203a499dda781652c9c94d54fc" gracePeriod=30
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.420682 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovs-rundir\") pod \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.420983 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e994ed3b-ff92-4997-b056-0c3b37fcebcf-config\") pod \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.421081 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7hdp\" (UniqueName: \"kubernetes.io/projected/e994ed3b-ff92-4997-b056-0c3b37fcebcf-kube-api-access-f7hdp\") pod \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.421103 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-combined-ca-bundle\") pod \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.421185 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovn-rundir\") pod \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.421215 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-metrics-certs-tls-certs\") pod \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\" (UID: \"e994ed3b-ff92-4997-b056-0c3b37fcebcf\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.422239 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "e994ed3b-ff92-4997-b056-0c3b37fcebcf" (UID: "e994ed3b-ff92-4997-b056-0c3b37fcebcf"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.432046 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "e994ed3b-ff92-4997-b056-0c3b37fcebcf" (UID: "e994ed3b-ff92-4997-b056-0c3b37fcebcf"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.435901 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e994ed3b-ff92-4997-b056-0c3b37fcebcf-config" (OuterVolumeSpecName: "config") pod "e994ed3b-ff92-4997-b056-0c3b37fcebcf" (UID: "e994ed3b-ff92-4997-b056-0c3b37fcebcf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.440328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e994ed3b-ff92-4997-b056-0c3b37fcebcf-kube-api-access-f7hdp" (OuterVolumeSpecName: "kube-api-access-f7hdp") pod "e994ed3b-ff92-4997-b056-0c3b37fcebcf" (UID: "e994ed3b-ff92-4997-b056-0c3b37fcebcf"). InnerVolumeSpecName "kube-api-access-f7hdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.456334 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e994ed3b-ff92-4997-b056-0c3b37fcebcf" (UID: "e994ed3b-ff92-4997-b056-0c3b37fcebcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.457076 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 17:05:50 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: if [ -n "placement" ]; then Jan 29 17:05:50 crc kubenswrapper[4813]: GRANT_DATABASE="placement" Jan 29 17:05:50 crc kubenswrapper[4813]: else Jan 29 17:05:50 crc kubenswrapper[4813]: GRANT_DATABASE="*" Jan 29 17:05:50 crc kubenswrapper[4813]: fi Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: # going for maximum compatibility here: Jan 29 17:05:50 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 17:05:50 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 17:05:50 crc kubenswrapper[4813]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 17:05:50 crc kubenswrapper[4813]: # support updates Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.459264 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-9248-account-create-update-t2pxw" podUID="f087a01a-3ee6-4655-9a03-a48fd4f903bb" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.468483 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9248-account-create-update-t2pxw"] Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.496327 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerName="rabbitmq" containerID="cri-o://a6111d6afee3055cda0c53ca25e0f552bbaa1c12f35b905627d640bdc35dfedb" gracePeriod=604800 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525499 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-config\") pod \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525598 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-nb\") pod \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525697 4813 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-swift-storage-0\") pod \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525727 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-sb\") pod \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525746 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config-secret\") pod \"124e029a-6d27-4b49-830c-4be46fc186cc\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525782 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config\") pod \"124e029a-6d27-4b49-830c-4be46fc186cc\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b99w\" (UniqueName: \"kubernetes.io/projected/124e029a-6d27-4b49-830c-4be46fc186cc-kube-api-access-2b99w\") pod \"124e029a-6d27-4b49-830c-4be46fc186cc\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525884 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-svc\") pod \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525900 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq2cb\" (UniqueName: \"kubernetes.io/projected/1ce996df-c4a1-431e-bae6-d16dfe1491f0-kube-api-access-nq2cb\") pod \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\" (UID: \"1ce996df-c4a1-431e-bae6-d16dfe1491f0\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.525944 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-combined-ca-bundle\") pod \"124e029a-6d27-4b49-830c-4be46fc186cc\" (UID: \"124e029a-6d27-4b49-830c-4be46fc186cc\") " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.526356 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7hdp\" (UniqueName: \"kubernetes.io/projected/e994ed3b-ff92-4997-b056-0c3b37fcebcf-kube-api-access-f7hdp\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.526368 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.526378 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.526386 4813 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e994ed3b-ff92-4997-b056-0c3b37fcebcf-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.526394 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e994ed3b-ff92-4997-b056-0c3b37fcebcf-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.548419 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124e029a-6d27-4b49-830c-4be46fc186cc-kube-api-access-2b99w" (OuterVolumeSpecName: "kube-api-access-2b99w") pod "124e029a-6d27-4b49-830c-4be46fc186cc" (UID: "124e029a-6d27-4b49-830c-4be46fc186cc"). InnerVolumeSpecName "kube-api-access-2b99w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.562732 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce996df-c4a1-431e-bae6-d16dfe1491f0-kube-api-access-nq2cb" (OuterVolumeSpecName: "kube-api-access-nq2cb") pod "1ce996df-c4a1-431e-bae6-d16dfe1491f0" (UID: "1ce996df-c4a1-431e-bae6-d16dfe1491f0"). InnerVolumeSpecName "kube-api-access-nq2cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.607080 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e994ed3b-ff92-4997-b056-0c3b37fcebcf" (UID: "e994ed3b-ff92-4997-b056-0c3b37fcebcf"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.620500 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b3b9c405-fbca-44fa-820a-1613a7df4c9c/ovsdbserver-sb/0.log" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.620581 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b3b9c405-fbca-44fa-820a-1613a7df4c9c","Type":"ContainerDied","Data":"fb2c14e56f8f52789c8f375257bf74152d33604924d36732bdc99cdd5d053f02"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.620622 4813 scope.go:117] "RemoveContainer" containerID="a98d4fc4fffd6c7b6ad588f19d86d27cf2a531b07982798f2fbbced264241b68" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.620798 4813 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.627648 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b99w\" (UniqueName: \"kubernetes.io/projected/124e029a-6d27-4b49-830c-4be46fc186cc-kube-api-access-2b99w\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.627681 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e994ed3b-ff92-4997-b056-0c3b37fcebcf-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.627692 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq2cb\" (UniqueName: \"kubernetes.io/projected/1ce996df-c4a1-431e-bae6-d16dfe1491f0-kube-api-access-nq2cb\") on node \"crc\" DevicePath \"\""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.635606 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1ce996df-c4a1-431e-bae6-d16dfe1491f0" (UID: "1ce996df-c4a1-431e-bae6-d16dfe1491f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.639826 4813 generic.go:334] "Generic (PLEG): container finished" podID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerID="9f57c05f59032c6a2d1a7cccf2fb5f3a52e7578aa8e9376e28838edd405492ff" exitCode=143
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.639906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8892a0d0-88f2-4e0a-aafb-20e04d9e6289","Type":"ContainerDied","Data":"9f57c05f59032c6a2d1a7cccf2fb5f3a52e7578aa8e9376e28838edd405492ff"}
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.640990 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_57a6aaa4-f80a-49fa-8236-967825494243/ovsdbserver-nb/0.log"
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.641064 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.655916 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.675101 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.677826 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "124e029a-6d27-4b49-830c-4be46fc186cc" (UID: "124e029a-6d27-4b49-830c-4be46fc186cc"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.680963 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1ce996df-c4a1-431e-bae6-d16dfe1491f0" (UID: "1ce996df-c4a1-431e-bae6-d16dfe1491f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.684567 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-h8ndz" event={"ID":"220240d2-4982-4884-80eb-09b077e332a1","Type":"ContainerStarted","Data":"fdc611dec248d81c91a4e49ea9f37d26d9e66affeaca96feafc2702bd5bdd2b8"}
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.689837 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_57a6aaa4-f80a-49fa-8236-967825494243/ovsdbserver-nb/0.log"
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.689888 4813 generic.go:334] "Generic (PLEG): container finished" podID="57a6aaa4-f80a-49fa-8236-967825494243" containerID="327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4" exitCode=143
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.690013 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.690337 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"57a6aaa4-f80a-49fa-8236-967825494243","Type":"ContainerDied","Data":"327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4"}
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.705171 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1ce996df-c4a1-431e-bae6-d16dfe1491f0" (UID: "1ce996df-c4a1-431e-bae6-d16dfe1491f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.714242 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "124e029a-6d27-4b49-830c-4be46fc186cc" (UID: "124e029a-6d27-4b49-830c-4be46fc186cc"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.715592 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "124e029a-6d27-4b49-830c-4be46fc186cc" (UID: "124e029a-6d27-4b49-830c-4be46fc186cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.716197 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b9868448c-29tkg" event={"ID":"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e","Type":"ContainerDied","Data":"085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e"}
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.716100 4813 generic.go:334] "Generic (PLEG): container finished" podID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerID="085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e" exitCode=143
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.729424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57a6aaa4-f80a-49fa-8236-967825494243-ovsdb-rundir\") pod \"57a6aaa4-f80a-49fa-8236-967825494243\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.729512 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"57a6aaa4-f80a-49fa-8236-967825494243\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.729619 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-combined-ca-bundle\") pod \"57a6aaa4-f80a-49fa-8236-967825494243\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.729678 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-ovsdbserver-nb-tls-certs\") pod \"57a6aaa4-f80a-49fa-8236-967825494243\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.729723 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsvdj\" (UniqueName: \"kubernetes.io/projected/57a6aaa4-f80a-49fa-8236-967825494243-kube-api-access-dsvdj\") pod \"57a6aaa4-f80a-49fa-8236-967825494243\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.729759 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-scripts\") pod \"57a6aaa4-f80a-49fa-8236-967825494243\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.729840 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-metrics-certs-tls-certs\") pod \"57a6aaa4-f80a-49fa-8236-967825494243\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.729921 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-config\") pod \"57a6aaa4-f80a-49fa-8236-967825494243\" (UID: \"57a6aaa4-f80a-49fa-8236-967825494243\") "
Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.730487 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
"Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.730507 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.730517 4813 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/124e029a-6d27-4b49-830c-4be46fc186cc-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.730545 4813 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.730553 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/124e029a-6d27-4b49-830c-4be46fc186cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.730562 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.730638 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.730702 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data podName:6463fe6f-cd6d-4078-8fa2-0d167de480df nodeName:}" failed. No retries permitted until 2026-01-29 17:05:52.730668957 +0000 UTC m=+2205.217872173 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data") pod "rabbitmq-server-0" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df") : configmap "rabbitmq-config-data" not found Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.731328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a6aaa4-f80a-49fa-8236-967825494243-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "57a6aaa4-f80a-49fa-8236-967825494243" (UID: "57a6aaa4-f80a-49fa-8236-967825494243"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.733807 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-scripts" (OuterVolumeSpecName: "scripts") pod "57a6aaa4-f80a-49fa-8236-967825494243" (UID: "57a6aaa4-f80a-49fa-8236-967825494243"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.734717 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "57a6aaa4-f80a-49fa-8236-967825494243" (UID: "57a6aaa4-f80a-49fa-8236-967825494243"). 
InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.735035 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-config" (OuterVolumeSpecName: "config") pod "57a6aaa4-f80a-49fa-8236-967825494243" (UID: "57a6aaa4-f80a-49fa-8236-967825494243"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.746781 4813 scope.go:117] "RemoveContainer" containerID="8e8b76ed0fa013c6fec75384e05efa7f1f3c80354a95f50dea2ec3a295bf5a92" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.749878 4813 generic.go:334] "Generic (PLEG): container finished" podID="124e029a-6d27-4b49-830c-4be46fc186cc" containerID="6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0" exitCode=137 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.750162 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.751202 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a6aaa4-f80a-49fa-8236-967825494243-kube-api-access-dsvdj" (OuterVolumeSpecName: "kube-api-access-dsvdj") pod "57a6aaa4-f80a-49fa-8236-967825494243" (UID: "57a6aaa4-f80a-49fa-8236-967825494243"). InnerVolumeSpecName "kube-api-access-dsvdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.751833 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1ce996df-c4a1-431e-bae6-d16dfe1491f0" (UID: "1ce996df-c4a1-431e-bae6-d16dfe1491f0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.774455 4813 generic.go:334] "Generic (PLEG): container finished" podID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" containerID="ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.774550 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" event={"ID":"1ce996df-c4a1-431e-bae6-d16dfe1491f0","Type":"ContainerDied","Data":"ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.774581 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" event={"ID":"1ce996df-c4a1-431e-bae6-d16dfe1491f0","Type":"ContainerDied","Data":"0a64f9aa98ed07358c1fd932b64c4dd654815ceb0b543ac7ece797e9e25a9fc3"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.774662 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-nvszj" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.783179 4813 generic.go:334] "Generic (PLEG): container finished" podID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerID="e28682307f39725fe7f765dc526b169a558cafd630133cfcaa46e4c31f5198f9" exitCode=143 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.783244 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dcb464dcd-dklmw" event={"ID":"e504c0d2-5734-47b7-aa7f-4cdb2a339d41","Type":"ContainerDied","Data":"e28682307f39725fe7f765dc526b169a558cafd630133cfcaa46e4c31f5198f9"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.784321 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9wz8r" event={"ID":"cbb62edf-bd31-473d-8833-664cb8007f92","Type":"ContainerStarted","Data":"dca3d3365652a788209de1983198b74648e4f0da80cb87b54402cc39b3158ff3"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.784734 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-9wz8r" secret="" err="secret \"galera-openstack-cell1-dockercfg-xgq58\" not found" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.792382 4813 generic.go:334] "Generic (PLEG): container finished" podID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerID="b2561ed4f6f62f8ecb439c1fcdc29a4178bfc26d6202bdfabc617cd3c474d99c" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.792452 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7","Type":"ContainerDied","Data":"b2561ed4f6f62f8ecb439c1fcdc29a4178bfc26d6202bdfabc617cd3c474d99c"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.797634 4813 scope.go:117] "RemoveContainer" containerID="93a587b04c51abd55fec3ccad7f3b0bfc20db316641f2585e79d269b2d6979d1" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.798047 4813 generic.go:334] "Generic (PLEG): container finished" podID="296a369a-d653-4033-9a10-7077b62875f1" containerID="a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b" exitCode=143 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.798088 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296a369a-d653-4033-9a10-7077b62875f1","Type":"ContainerDied","Data":"a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b"} Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.809255 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 17:05:50 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: if [ -n "" ]; then Jan 29 17:05:50 crc kubenswrapper[4813]: GRANT_DATABASE="" Jan 29 17:05:50 crc 
kubenswrapper[4813]: else Jan 29 17:05:50 crc kubenswrapper[4813]: GRANT_DATABASE="*" Jan 29 17:05:50 crc kubenswrapper[4813]: fi Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: # going for maximum compatibility here: Jan 29 17:05:50 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 17:05:50 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 17:05:50 crc kubenswrapper[4813]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 17:05:50 crc kubenswrapper[4813]: # support updates Jan 29 17:05:50 crc kubenswrapper[4813]: Jan 29 17:05:50 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.810774 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-9wz8r" podUID="cbb62edf-bd31-473d-8833-664cb8007f92" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.815873 4813 generic.go:334] "Generic (PLEG): container finished" podID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.815936 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xqdpz" event={"ID":"6aa0a9e1-c775-4e0b-8286-ef272885c653","Type":"ContainerDied","Data":"77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.820670 4813 generic.go:334] "Generic (PLEG): container finished" podID="dd942f21-0785-443e-ab04-27548ecd9207" containerID="a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a" exitCode=143 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.820778 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65d9b85856-5rdmb" event={"ID":"dd942f21-0785-443e-ab04-27548ecd9207","Type":"ContainerDied","Data":"a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.825048 4813 generic.go:334] "Generic (PLEG): container finished" podID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerID="8a7ea31b28e2c6bfe834bf71fe3196b448efd62becd905ed803c99a217f36124" exitCode=143 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.825141 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd64c006-aedc-47b2-8704-3b1d05879f8c","Type":"ContainerDied","Data":"8a7ea31b28e2c6bfe834bf71fe3196b448efd62becd905ed803c99a217f36124"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.832002 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.832030 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57a6aaa4-f80a-49fa-8236-967825494243-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.832039 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/57a6aaa4-f80a-49fa-8236-967825494243-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.832059 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.832069 4813 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.832078 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsvdj\" (UniqueName: \"kubernetes.io/projected/57a6aaa4-f80a-49fa-8236-967825494243-kube-api-access-dsvdj\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.836770 4813 generic.go:334] "Generic (PLEG): container finished" podID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerID="c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b" exitCode=143 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.836873 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" event={"ID":"3c26451a-b543-4a70-8b46-73cd7d92e45c","Type":"ContainerDied","Data":"c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.841594 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qw2k9_e994ed3b-ff92-4997-b056-0c3b37fcebcf/openstack-network-exporter/0.log" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.842201 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qw2k9" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.845301 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qw2k9" event={"ID":"e994ed3b-ff92-4997-b056-0c3b37fcebcf","Type":"ContainerDied","Data":"81d09bce7d548ab32871775a42d2aeb244c30940c5fdfe84195ccb8a5a512aba"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.852504 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-config" (OuterVolumeSpecName: "config") pod "1ce996df-c4a1-431e-bae6-d16dfe1491f0" (UID: "1ce996df-c4a1-431e-bae6-d16dfe1491f0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.855338 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.864203 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerID="2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.864353 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58968d868f-mvfqv" event={"ID":"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59","Type":"ContainerDied","Data":"2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.882224 4813 generic.go:334] "Generic (PLEG): container finished" podID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerID="6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4" exitCode=143 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.882627 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d58049e2-10b0-4b6d-9a9e-d90420a1cecb","Type":"ContainerDied","Data":"6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907636 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="0cf8a700e82655607fac9d5a91b1fe7324cf5f6a7f959b303b9cc0b73826bb23" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907819 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="0912400cbaf3d64dac1bbb048041811be489912643b2b55d8cdd00b6904d5618" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907835 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="974ef80b14499b0e09dc5f020a2ac326c98830e1859b4884bb3296cb6d847ee5" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907844 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="2cc8dee52a522946f284add3268d6d09db41b3c578932c60a72ffe29a472f072" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907855 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="85396d22d1bacb74de0dc15e3d6568fd0dd75a24110e76865a22e9698f5e03d7" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907865 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="18d667fd3f825a0180d5ee212d62dc003f4d18f1f616c39664709f8fd3e22de4" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907873 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="57a4c1b42a22a49c5352631c53b0536257db9090a489fe6d5ac1fae934fe7b10" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907896 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="8300f3dbc427b40576635dc7f78e11d14f9f9f09cfe4720072bd30ea18c40a96" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907908 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="e9e8a5f597dfe66901c775ed9f99ffb45cc88a323355d8c8e377c086ac43eb70" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907915 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="6b315628dc01734b17c3bfcb292a1a9b22d94c5ec378c68879f38fea37cae951" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907926 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="34a647053be6be5646ecc99401b474db714d929ce717508cb1f765ce77b6559c" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907934 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="632060180af065e41041ad9ffff2395b8df04b6c43e12683432988737705e770" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907942 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="c9a12f754af1b256a78355b7b3a3e4f909d6a20d352cb12126f67304e678fc2f" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.907950 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="357fa21995192338eea4cf618d4dd037f9d9c53370af078711f773adf08dcd27" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908013 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"0cf8a700e82655607fac9d5a91b1fe7324cf5f6a7f959b303b9cc0b73826bb23"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908057 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"0912400cbaf3d64dac1bbb048041811be489912643b2b55d8cdd00b6904d5618"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"974ef80b14499b0e09dc5f020a2ac326c98830e1859b4884bb3296cb6d847ee5"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908084 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"2cc8dee52a522946f284add3268d6d09db41b3c578932c60a72ffe29a472f072"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908097 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"85396d22d1bacb74de0dc15e3d6568fd0dd75a24110e76865a22e9698f5e03d7"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908126 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"18d667fd3f825a0180d5ee212d62dc003f4d18f1f616c39664709f8fd3e22de4"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908140 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"57a4c1b42a22a49c5352631c53b0536257db9090a489fe6d5ac1fae934fe7b10"} Jan 29 17:05:50 crc 
kubenswrapper[4813]: I0129 17:05:50.908151 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"8300f3dbc427b40576635dc7f78e11d14f9f9f09cfe4720072bd30ea18c40a96"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908165 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"e9e8a5f597dfe66901c775ed9f99ffb45cc88a323355d8c8e377c086ac43eb70"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908176 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"6b315628dc01734b17c3bfcb292a1a9b22d94c5ec378c68879f38fea37cae951"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908189 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"34a647053be6be5646ecc99401b474db714d929ce717508cb1f765ce77b6559c"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"632060180af065e41041ad9ffff2395b8df04b6c43e12683432988737705e770"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908213 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"c9a12f754af1b256a78355b7b3a3e4f909d6a20d352cb12126f67304e678fc2f"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.908228 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"357fa21995192338eea4cf618d4dd037f9d9c53370af078711f773adf08dcd27"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.912506 4813 generic.go:334] "Generic (PLEG): container finished" podID="494a1163-f584-43fa-9224-825be2a90c27" containerID="92a3054a95c28ed9a7e7347b5d0b1b2fbb5102f2edf17c6d56d65e646701fc21" exitCode=143 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.912607 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"494a1163-f584-43fa-9224-825be2a90c27","Type":"ContainerDied","Data":"92a3054a95c28ed9a7e7347b5d0b1b2fbb5102f2edf17c6d56d65e646701fc21"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.916892 4813 generic.go:334] "Generic (PLEG): container finished" podID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerID="5aa28b5649ace6006edcf7ca887bd1062f191ab08030423dc4e8ee65bc52dfff" exitCode=0 Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.916976 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-886657b65-p552j" event={"ID":"8fa06e8d-be9e-4451-8387-d3ec49dd8306","Type":"ContainerDied","Data":"5aa28b5649ace6006edcf7ca887bd1062f191ab08030423dc4e8ee65bc52dfff"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.917140 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "57a6aaa4-f80a-49fa-8236-967825494243" (UID: 
"57a6aaa4-f80a-49fa-8236-967825494243"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.918785 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9248-account-create-update-t2pxw" event={"ID":"f087a01a-3ee6-4655-9a03-a48fd4f903bb","Type":"ContainerStarted","Data":"aa883881e4d9a0aca2f708555d86be175c3e1114eea01947e724db1c9e7ac781"} Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.926203 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57a6aaa4-f80a-49fa-8236-967825494243" (UID: "57a6aaa4-f80a-49fa-8236-967825494243"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.947418 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.947457 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ce996df-c4a1-431e-bae6-d16dfe1491f0-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.947467 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.947477 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.955351 4813 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 29 17:05:50 crc kubenswrapper[4813]: E0129 17:05:50.955448 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts podName:cbb62edf-bd31-473d-8833-664cb8007f92 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:51.455416538 +0000 UTC m=+2203.942619754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts") pod "root-account-create-update-9wz8r" (UID: "cbb62edf-bd31-473d-8833-664cb8007f92") : configmap "openstack-cell1-scripts" not found Jan 29 17:05:50 crc kubenswrapper[4813]: I0129 17:05:50.976517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "57a6aaa4-f80a-49fa-8236-967825494243" (UID: "57a6aaa4-f80a-49fa-8236-967825494243"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.019151 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cb83-account-create-update-8jlrr"] Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.025049 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cf77-account-create-update-t6fp5"] Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.051414 4813 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a6aaa4-f80a-49fa-8236-967825494243-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.099234 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 17:05:51 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: if [ -n "neutron" ]; then Jan 29 17:05:51 crc kubenswrapper[4813]: GRANT_DATABASE="neutron" Jan 29 17:05:51 crc kubenswrapper[4813]: else Jan 29 17:05:51 crc kubenswrapper[4813]: GRANT_DATABASE="*" Jan 29 17:05:51 crc kubenswrapper[4813]: fi Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: # going for maximum compatibility here: Jan 29 17:05:51 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 17:05:51 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 17:05:51 crc kubenswrapper[4813]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 17:05:51 crc kubenswrapper[4813]: # support updates Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.100305 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-cb83-account-create-update-8jlrr" podUID="6d3db8f1-dec6-4406-99e2-fce32fd0792a" Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.100928 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 17:05:51 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: if [ -n "glance" ]; then Jan 29 17:05:51 crc kubenswrapper[4813]: GRANT_DATABASE="glance" Jan 29 17:05:51 crc kubenswrapper[4813]: else Jan 29 17:05:51 crc kubenswrapper[4813]: GRANT_DATABASE="*" Jan 29 17:05:51 crc kubenswrapper[4813]: fi Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: # going for maximum compatibility here: Jan 29 17:05:51 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 17:05:51 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 17:05:51 crc kubenswrapper[4813]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 17:05:51 crc kubenswrapper[4813]: # support updates Jan 29 17:05:51 crc kubenswrapper[4813]: Jan 29 17:05:51 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.102611 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-cf77-account-create-update-t6fp5" podUID="77159d9e-2d2d-42a4-b8c7-77da8a05f9ea" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.321388 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.340009 4813 scope.go:117] "RemoveContainer" containerID="327220da2cf69223161404721d666b2b8165c954c6aeaea6c091d82aed97a8c4" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.360735 4813 util.go:48] "No ready sandbox for pod can be found. 
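[editor's note] In the "Unhandled Error" dumps above, the heredoc body after "$MYSQL_CMD <" is truncated by the log capture, so the actual SQL is not recoverable from this file. Going only by the script's own comments (CREATE for MySQL 8 compatibility, password and TLS via ALTER so reruns update in place, then GRANT), a hypothetical sketch of the missing block could look like this; DatabaseUser is an assumed variable name, not taken from the log:

# Hypothetical reconstruction, for illustration only -- the real heredoc
# body is cut out of this log.
$MYSQL_CMD <<EOF
CREATE USER IF NOT EXISTS '${DatabaseUser}'@'%';
ALTER USER '${DatabaseUser}'@'%' IDENTIFIED BY '${DatabasePassword}';
GRANT ALL PRIVILEGES ON ${GRANT_DATABASE}.* TO '${DatabaseUser}'@'%';
EOF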
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.364813 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-nvszj"] Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.372026 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.374535 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.380283 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-nvszj"] Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.381092 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.381180 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="ovn-northd" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.389518 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-qw2k9"] Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.406001 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-qw2k9"] Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.412491 4813 scope.go:117] "RemoveContainer" containerID="6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.431067 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.460173 4813 scope.go:117] "RemoveContainer" containerID="6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0" Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.460878 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0\": container with ID starting with 6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0 not found: ID does not exist" containerID="6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.460929 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0"} err="failed to get container status 
\"6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0\": rpc error: code = NotFound desc = could not find container \"6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0\": container with ID starting with 6a631fcebb00fc552b0d7cab0910fcb1efb8270e3a40f136cbf640cc3a5144d0 not found: ID does not exist" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.460961 4813 scope.go:117] "RemoveContainer" containerID="ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.465593 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466100 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-operator-scripts\") pod \"392bc7cc-af71-4ee6-b844-e5adeeabba64\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466161 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww4f9\" (UniqueName: \"kubernetes.io/projected/ade16a11-d1df-4706-9439-d0ef93069a66-kube-api-access-ww4f9\") pod \"ade16a11-d1df-4706-9439-d0ef93069a66\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466185 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-config-data\") pod \"ade16a11-d1df-4706-9439-d0ef93069a66\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466221 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-combined-ca-bundle\") pod \"ade16a11-d1df-4706-9439-d0ef93069a66\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466262 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-galera-tls-certs\") pod \"392bc7cc-af71-4ee6-b844-e5adeeabba64\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466307 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-nova-novncproxy-tls-certs\") pod \"ade16a11-d1df-4706-9439-d0ef93069a66\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466381 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"392bc7cc-af71-4ee6-b844-e5adeeabba64\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466450 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbphm\" (UniqueName: \"kubernetes.io/projected/392bc7cc-af71-4ee6-b844-e5adeeabba64-kube-api-access-xbphm\") pod \"392bc7cc-af71-4ee6-b844-e5adeeabba64\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 
17:05:51.466516 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-generated\") pod \"392bc7cc-af71-4ee6-b844-e5adeeabba64\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466544 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-default\") pod \"392bc7cc-af71-4ee6-b844-e5adeeabba64\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466569 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-combined-ca-bundle\") pod \"392bc7cc-af71-4ee6-b844-e5adeeabba64\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466661 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-vencrypt-tls-certs\") pod \"ade16a11-d1df-4706-9439-d0ef93069a66\" (UID: \"ade16a11-d1df-4706-9439-d0ef93069a66\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.466693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-kolla-config\") pod \"392bc7cc-af71-4ee6-b844-e5adeeabba64\" (UID: \"392bc7cc-af71-4ee6-b844-e5adeeabba64\") " Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.467232 4813 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.467283 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts podName:cbb62edf-bd31-473d-8833-664cb8007f92 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:52.467270375 +0000 UTC m=+2204.954473591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts") pod "root-account-create-update-9wz8r" (UID: "cbb62edf-bd31-473d-8833-664cb8007f92") : configmap "openstack-cell1-scripts" not found Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.468736 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "392bc7cc-af71-4ee6-b844-e5adeeabba64" (UID: "392bc7cc-af71-4ee6-b844-e5adeeabba64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.469430 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "392bc7cc-af71-4ee6-b844-e5adeeabba64" (UID: "392bc7cc-af71-4ee6-b844-e5adeeabba64"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.468000 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "392bc7cc-af71-4ee6-b844-e5adeeabba64" (UID: "392bc7cc-af71-4ee6-b844-e5adeeabba64"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.493024 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "392bc7cc-af71-4ee6-b844-e5adeeabba64" (UID: "392bc7cc-af71-4ee6-b844-e5adeeabba64"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.499580 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade16a11-d1df-4706-9439-d0ef93069a66-kube-api-access-ww4f9" (OuterVolumeSpecName: "kube-api-access-ww4f9") pod "ade16a11-d1df-4706-9439-d0ef93069a66" (UID: "ade16a11-d1df-4706-9439-d0ef93069a66"). InnerVolumeSpecName "kube-api-access-ww4f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.506257 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392bc7cc-af71-4ee6-b844-e5adeeabba64-kube-api-access-xbphm" (OuterVolumeSpecName: "kube-api-access-xbphm") pod "392bc7cc-af71-4ee6-b844-e5adeeabba64" (UID: "392bc7cc-af71-4ee6-b844-e5adeeabba64"). InnerVolumeSpecName "kube-api-access-xbphm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.513781 4813 scope.go:117] "RemoveContainer" containerID="812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.519680 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "392bc7cc-af71-4ee6-b844-e5adeeabba64" (UID: "392bc7cc-af71-4ee6-b844-e5adeeabba64"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.537508 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "392bc7cc-af71-4ee6-b844-e5adeeabba64" (UID: "392bc7cc-af71-4ee6-b844-e5adeeabba64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.556720 4813 scope.go:117] "RemoveContainer" containerID="ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.556975 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-config-data" (OuterVolumeSpecName: "config-data") pod "ade16a11-d1df-4706-9439-d0ef93069a66" (UID: "ade16a11-d1df-4706-9439-d0ef93069a66"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.560296 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d\": container with ID starting with ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d not found: ID does not exist" containerID="ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.560351 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d"} err="failed to get container status \"ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d\": rpc error: code = NotFound desc = could not find container \"ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d\": container with ID starting with ff1c53577caf09269ac31a706dafa63a8db7007aeceaec4ff1290d791003fc7d not found: ID does not exist" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.560388 4813 scope.go:117] "RemoveContainer" containerID="812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f" Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.561099 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f\": container with ID starting with 812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f not found: ID does not exist" containerID="812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.561158 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f"} err="failed to get container status \"812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f\": rpc error: code = NotFound desc = could not find container \"812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f\": container with ID starting with 812b31891f50eea711a9d301c19be84014de28293f8cc9b73ef5f297a8164d8f not found: ID does not exist" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.561191 4813 scope.go:117] "RemoveContainer" containerID="9ed8ecb763d33646b3e8cc91f896a3b2de2c4f6990d7c4d2843989603afddfed" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.571577 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ade16a11-d1df-4706-9439-d0ef93069a66" (UID: "ade16a11-d1df-4706-9439-d0ef93069a66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.571726 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572246 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572640 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbphm\" (UniqueName: \"kubernetes.io/projected/392bc7cc-af71-4ee6-b844-e5adeeabba64-kube-api-access-xbphm\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572663 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572677 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572693 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572708 4813 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572723 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392bc7cc-af71-4ee6-b844-e5adeeabba64-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572735 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww4f9\" (UniqueName: \"kubernetes.io/projected/ade16a11-d1df-4706-9439-d0ef93069a66-kube-api-access-ww4f9\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.572746 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.578226 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.604004 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58968d868f-mvfqv" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.604056 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.674770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "ade16a11-d1df-4706-9439-d0ef93069a66" (UID: "ade16a11-d1df-4706-9439-d0ef93069a66"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.684129 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "ade16a11-d1df-4706-9439-d0ef93069a66" (UID: "ade16a11-d1df-4706-9439-d0ef93069a66"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.689868 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.691531 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.691580 4813 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.692263 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "392bc7cc-af71-4ee6-b844-e5adeeabba64" (UID: "392bc7cc-af71-4ee6-b844-e5adeeabba64"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.695032 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792634 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-logs\") pod \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792687 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data-custom\") pod \"3c26451a-b543-4a70-8b46-73cd7d92e45c\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792716 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn992\" (UniqueName: \"kubernetes.io/projected/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-kube-api-access-nn992\") pod \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792756 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g2cs\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-kube-api-access-6g2cs\") pod \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792791 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-etc-swift\") pod \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792808 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-internal-tls-certs\") pod \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792829 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f087a01a-3ee6-4655-9a03-a48fd4f903bb-operator-scripts\") pod \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\" (UID: \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792860 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-combined-ca-bundle\") pod \"3c26451a-b543-4a70-8b46-73cd7d92e45c\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792890 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data-custom\") pod \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792912 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-combined-ca-bundle\") pod \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.792956 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-run-httpd\") pod \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.793802 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-config-data\") pod \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.793833 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsf88\" (UniqueName: \"kubernetes.io/projected/3c26451a-b543-4a70-8b46-73cd7d92e45c-kube-api-access-nsf88\") pod \"3c26451a-b543-4a70-8b46-73cd7d92e45c\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.793868 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data\") pod \"3c26451a-b543-4a70-8b46-73cd7d92e45c\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.793885 4813 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c26451a-b543-4a70-8b46-73cd7d92e45c-logs\") pod \"3c26451a-b543-4a70-8b46-73cd7d92e45c\" (UID: \"3c26451a-b543-4a70-8b46-73cd7d92e45c\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.793910 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-combined-ca-bundle\") pod \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.794019 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data\") pod \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\" (UID: \"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.794035 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-log-httpd\") pod \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.794056 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58nb4\" (UniqueName: \"kubernetes.io/projected/f087a01a-3ee6-4655-9a03-a48fd4f903bb-kube-api-access-58nb4\") pod \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\" (UID: \"f087a01a-3ee6-4655-9a03-a48fd4f903bb\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.794072 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-public-tls-certs\") pod \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\" (UID: \"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59\") " Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.795399 4813 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/392bc7cc-af71-4ee6-b844-e5adeeabba64-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.795414 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.795425 4813 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade16a11-d1df-4706-9439-d0ef93069a66-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.795473 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:51 crc kubenswrapper[4813]: E0129 17:05:51.795516 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data podName:bda951f8-8354-4ca3-be9e-f92f6fea40cc nodeName:}" failed. No retries permitted until 2026-01-29 17:05:55.795502675 +0000 UTC m=+2208.282705891 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data") pod "rabbitmq-cell1-server-0" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc") : configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.795920 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f087a01a-3ee6-4655-9a03-a48fd4f903bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f087a01a-3ee6-4655-9a03-a48fd4f903bb" (UID: "f087a01a-3ee6-4655-9a03-a48fd4f903bb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.798496 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c26451a-b543-4a70-8b46-73cd7d92e45c-logs" (OuterVolumeSpecName: "logs") pod "3c26451a-b543-4a70-8b46-73cd7d92e45c" (UID: "3c26451a-b543-4a70-8b46-73cd7d92e45c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.799287 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c26451a-b543-4a70-8b46-73cd7d92e45c" (UID: "3c26451a-b543-4a70-8b46-73cd7d92e45c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.799942 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-logs" (OuterVolumeSpecName: "logs") pod "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" (UID: "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.802856 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c26451a-b543-4a70-8b46-73cd7d92e45c-kube-api-access-nsf88" (OuterVolumeSpecName: "kube-api-access-nsf88") pod "3c26451a-b543-4a70-8b46-73cd7d92e45c" (UID: "3c26451a-b543-4a70-8b46-73cd7d92e45c"). InnerVolumeSpecName "kube-api-access-nsf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.805057 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-kube-api-access-nn992" (OuterVolumeSpecName: "kube-api-access-nn992") pod "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" (UID: "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e"). InnerVolumeSpecName "kube-api-access-nn992". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.805729 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" (UID: "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.805901 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" (UID: "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.806150 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" (UID: "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.813865 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" (UID: "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.818899 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-kube-api-access-6g2cs" (OuterVolumeSpecName: "kube-api-access-6g2cs") pod "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" (UID: "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59"). InnerVolumeSpecName "kube-api-access-6g2cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.820363 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f087a01a-3ee6-4655-9a03-a48fd4f903bb-kube-api-access-58nb4" (OuterVolumeSpecName: "kube-api-access-58nb4") pod "f087a01a-3ee6-4655-9a03-a48fd4f903bb" (UID: "f087a01a-3ee6-4655-9a03-a48fd4f903bb"). InnerVolumeSpecName "kube-api-access-58nb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.850119 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" (UID: "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.865016 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c26451a-b543-4a70-8b46-73cd7d92e45c" (UID: "3c26451a-b543-4a70-8b46-73cd7d92e45c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910069 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910125 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58nb4\" (UniqueName: \"kubernetes.io/projected/f087a01a-3ee6-4655-9a03-a48fd4f903bb-kube-api-access-58nb4\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910138 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910149 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910161 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn992\" (UniqueName: \"kubernetes.io/projected/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-kube-api-access-nn992\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910172 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g2cs\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-kube-api-access-6g2cs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910183 4813 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910194 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f087a01a-3ee6-4655-9a03-a48fd4f903bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910204 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910214 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910225 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910234 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910245 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsf88\" (UniqueName: \"kubernetes.io/projected/3c26451a-b543-4a70-8b46-73cd7d92e45c-kube-api-access-nsf88\") on node \"crc\" 
DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.910255 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c26451a-b543-4a70-8b46-73cd7d92e45c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.921832 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" (UID: "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.949646 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb83-account-create-update-8jlrr" event={"ID":"6d3db8f1-dec6-4406-99e2-fce32fd0792a","Type":"ContainerStarted","Data":"119bb47d0a7b1e6bc312a85c8fec75808e9fc0309356a4add0ade57b3524ca15"} Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.950819 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data" (OuterVolumeSpecName: "config-data") pod "3c26451a-b543-4a70-8b46-73cd7d92e45c" (UID: "3c26451a-b543-4a70-8b46-73cd7d92e45c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:51 crc kubenswrapper[4813]: I0129 17:05:51.982159 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-config-data" (OuterVolumeSpecName: "config-data") pod "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" (UID: "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:51.984381 4813 generic.go:334] "Generic (PLEG): container finished" podID="392bc7cc-af71-4ee6-b844-e5adeeabba64" containerID="d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9" exitCode=0 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:51.984461 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"392bc7cc-af71-4ee6-b844-e5adeeabba64","Type":"ContainerDied","Data":"d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:51.984492 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"392bc7cc-af71-4ee6-b844-e5adeeabba64","Type":"ContainerDied","Data":"0c72a2c16ebdf06dc7c4bb3f3e650b173d9f4cf01570f2f9853480c7034d97c9"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:51.984513 4813 scope.go:117] "RemoveContainer" containerID="d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:51.984679 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.006248 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" (UID: "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.007628 4813 generic.go:334] "Generic (PLEG): container finished" podID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerID="d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043" exitCode=0 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.007776 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.007953 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" event={"ID":"3c26451a-b543-4a70-8b46-73cd7d92e45c","Type":"ContainerDied","Data":"d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.007992 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-864b755644-8j2zm" event={"ID":"3c26451a-b543-4a70-8b46-73cd7d92e45c","Type":"ContainerDied","Data":"9921a16df065f5224071914630f307aeb05d8de644fdcb8ebb09308052e0fe51"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.013059 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.013087 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.013098 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c26451a-b543-4a70-8b46-73cd7d92e45c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.013220 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.023746 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.026063 4813 generic.go:334] "Generic (PLEG): container finished" podID="ade16a11-d1df-4706-9439-d0ef93069a66" containerID="e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae" exitCode=0 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.026604 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ade16a11-d1df-4706-9439-d0ef93069a66","Type":"ContainerDied","Data":"e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.026661 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ade16a11-d1df-4706-9439-d0ef93069a66","Type":"ContainerDied","Data":"655343c962efe84e49a43101f496cc72eedb3000cfdcb75daf6d174c4c5d604f"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.036233 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.042550 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cf77-account-create-update-t6fp5" event={"ID":"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea","Type":"ContainerStarted","Data":"5817b319e42dff97eb55f7996c58d761c6eadde3c517694453af475a1bb75999"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.065781 4813 generic.go:334] "Generic (PLEG): container finished" podID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerID="f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a" exitCode=0 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.065890 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58968d868f-mvfqv" event={"ID":"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59","Type":"ContainerDied","Data":"f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.065937 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58968d868f-mvfqv" event={"ID":"9f2ea2fe-60d7-41f2-bc76-9c58e892bc59","Type":"ContainerDied","Data":"9b61897c69434cc93df241e9a72dc9a553c59738c74c8b9c21283221e1f8a8d0"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.066919 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58968d868f-mvfqv" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.078305 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data" (OuterVolumeSpecName: "config-data") pod "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" (UID: "0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.083181 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" (UID: "9f2ea2fe-60d7-41f2-bc76-9c58e892bc59"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.117484 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.117757 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.120089 4813 generic.go:334] "Generic (PLEG): container finished" podID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerID="ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891" exitCode=0 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.120324 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b9868448c-29tkg" event={"ID":"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e","Type":"ContainerDied","Data":"ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.120375 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b9868448c-29tkg" event={"ID":"0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e","Type":"ContainerDied","Data":"24573193c22eb48d12a34100def86f158f24ea567d8a2096503adcd6288de301"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.120450 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b9868448c-29tkg" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.125287 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9248-account-create-update-t2pxw" event={"ID":"f087a01a-3ee6-4655-9a03-a48fd4f903bb","Type":"ContainerDied","Data":"aa883881e4d9a0aca2f708555d86be175c3e1114eea01947e724db1c9e7ac781"} Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.125563 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9248-account-create-update-t2pxw" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.159433 4813 scope.go:117] "RemoveContainer" containerID="575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.226728 4813 scope.go:117] "RemoveContainer" containerID="d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.243190 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9\": container with ID starting with d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9 not found: ID does not exist" containerID="d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.243349 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9"} err="failed to get container status \"d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9\": rpc error: code = NotFound desc = could not find container \"d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9\": container with ID starting with d6d0be47e1d90a1d31b0cb9888830d4211f90da2394cb1445794077865f046a9 not found: ID does not exist" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.243375 4813 scope.go:117] "RemoveContainer" containerID="575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.246234 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244\": container with ID starting with 575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244 not found: ID does not exist" containerID="575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.246274 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244"} err="failed to get container status \"575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244\": rpc error: code = NotFound desc = could not find container \"575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244\": container with ID starting with 575ab0edfed4084c9b72521824cc25a42b8a5c8b7deefca5da7c7dfdb9d63244 not found: ID does not exist" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.246306 4813 scope.go:117] "RemoveContainer" containerID="d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.278583 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124e029a-6d27-4b49-830c-4be46fc186cc" path="/var/lib/kubelet/pods/124e029a-6d27-4b49-830c-4be46fc186cc/volumes" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.279510 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" path="/var/lib/kubelet/pods/1ce996df-c4a1-431e-bae6-d16dfe1491f0/volumes" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.282056 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="57a6aaa4-f80a-49fa-8236-967825494243" path="/var/lib/kubelet/pods/57a6aaa4-f80a-49fa-8236-967825494243/volumes" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.283103 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" path="/var/lib/kubelet/pods/b3b9c405-fbca-44fa-820a-1613a7df4c9c/volumes" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.295021 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c893ab38-542a-4b23-b2c7-cc6a1a8281a5" path="/var/lib/kubelet/pods/c893ab38-542a-4b23-b2c7-cc6a1a8281a5/volumes" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.303962 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e994ed3b-ff92-4997-b056-0c3b37fcebcf" path="/var/lib/kubelet/pods/e994ed3b-ff92-4997-b056-0c3b37fcebcf/volumes" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317781 4813 scope.go:117] "RemoveContainer" containerID="c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317831 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9248-account-create-update-t2pxw"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317880 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9248-account-create-update-t2pxw"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317904 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-864b755644-8j2zm"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317918 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-864b755644-8j2zm"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317933 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317948 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317961 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.317975 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.341305 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5b9868448c-29tkg"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.364486 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5b9868448c-29tkg"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.389072 4813 scope.go:117] "RemoveContainer" containerID="d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.390407 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043\": container with ID starting with d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043 not found: ID does not exist" containerID="d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.390471 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043"} err="failed to get container status \"d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043\": rpc error: code = NotFound desc = could not find container \"d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043\": container with ID starting with d924cc8f7ad5cbf0959ea7a95bcea6a806e209de6d8f7e8eb0ece05dd8962043 not found: ID does not exist" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.390499 4813 scope.go:117] "RemoveContainer" containerID="c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.403253 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b\": container with ID starting with c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b not found: ID does not exist" containerID="c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.403298 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b"} err="failed to get container status \"c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b\": rpc error: code = NotFound desc = could not find container \"c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b\": container with ID starting with c0d618197fea6e92041227648b901cdfa5000a9611c71c284a6436cded23aa7b not found: ID does not exist" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.403324 4813 scope.go:117] "RemoveContainer" containerID="e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.430288 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-58968d868f-mvfqv"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.443194 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-58968d868f-mvfqv"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.464861 4813 scope.go:117] "RemoveContainer" containerID="e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.467499 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae\": container with ID starting with e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae not found: ID does not exist" containerID="e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.467644 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae"} err="failed to get container status \"e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae\": rpc error: code = NotFound desc = could not find container \"e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae\": container with ID starting with e13df80424b2cf34d4e333b5cc320916c368800db479d2d7df4973d2f38d24ae not found: ID does not exist" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.467745 4813 scope.go:117] "RemoveContainer" 
containerID="f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.493165 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.505778 4813 scope.go:117] "RemoveContainer" containerID="2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.534709 4813 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.534819 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts podName:cbb62edf-bd31-473d-8833-664cb8007f92 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:54.534771468 +0000 UTC m=+2207.021974684 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts") pod "root-account-create-update-9wz8r" (UID: "cbb62edf-bd31-473d-8833-664cb8007f92") : configmap "openstack-cell1-scripts" not found Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.558953 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.565333 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="ceilometer-central-agent" containerID="cri-o://e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072" gracePeriod=30 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.565500 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="proxy-httpd" containerID="cri-o://90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea" gracePeriod=30 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.565554 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="sg-core" containerID="cri-o://25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d" gracePeriod=30 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.565599 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="ceilometer-notification-agent" containerID="cri-o://c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098" gracePeriod=30 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.637033 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.637261 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6471b922-d5d2-4a43-ab3a-fbe69b43beb9" containerName="kube-state-metrics" containerID="cri-o://02d822763cfec808d3e70dcfe053468ed8dee511781bd6c54dfa3346b4eb9935" gracePeriod=30 Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.637911 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwhf\" 
(UniqueName: \"kubernetes.io/projected/6d3db8f1-dec6-4406-99e2-fce32fd0792a-kube-api-access-8fwhf\") pod \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\" (UID: \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\") " Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.637983 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3db8f1-dec6-4406-99e2-fce32fd0792a-operator-scripts\") pod \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\" (UID: \"6d3db8f1-dec6-4406-99e2-fce32fd0792a\") " Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.638709 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.213:3000/\": EOF" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.644340 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3db8f1-dec6-4406-99e2-fce32fd0792a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d3db8f1-dec6-4406-99e2-fce32fd0792a" (UID: "6d3db8f1-dec6-4406-99e2-fce32fd0792a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.647218 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3db8f1-dec6-4406-99e2-fce32fd0792a-kube-api-access-8fwhf" (OuterVolumeSpecName: "kube-api-access-8fwhf") pod "6d3db8f1-dec6-4406-99e2-fce32fd0792a" (UID: "6d3db8f1-dec6-4406-99e2-fce32fd0792a"). InnerVolumeSpecName "kube-api-access-8fwhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.662378 4813 scope.go:117] "RemoveContainer" containerID="f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.663425 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a\": container with ID starting with f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a not found: ID does not exist" containerID="f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.663472 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a"} err="failed to get container status \"f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a\": rpc error: code = NotFound desc = could not find container \"f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a\": container with ID starting with f984e5b5202bd54115a492e2a3b727f1fa0e963f46e83ac29d03fa85df07832a not found: ID does not exist" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.663504 4813 scope.go:117] "RemoveContainer" containerID="2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.666256 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb\": container with ID starting with 2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb not found: ID does not 
exist" containerID="2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.666286 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb"} err="failed to get container status \"2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb\": rpc error: code = NotFound desc = could not find container \"2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb\": container with ID starting with 2636a834bf848d42b55f4015761f3c10fa306f0f4a11a6841e347ab59b3489bb not found: ID does not exist" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.666320 4813 scope.go:117] "RemoveContainer" containerID="ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.749775 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3db8f1-dec6-4406-99e2-fce32fd0792a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.749817 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fwhf\" (UniqueName: \"kubernetes.io/projected/6d3db8f1-dec6-4406-99e2-fce32fd0792a-kube-api-access-8fwhf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.749891 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.749942 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data podName:6463fe6f-cd6d-4078-8fa2-0d167de480df nodeName:}" failed. No retries permitted until 2026-01-29 17:05:56.749925598 +0000 UTC m=+2209.237128814 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data") pod "rabbitmq-server-0" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df") : configmap "rabbitmq-config-data" not found Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.850386 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.875365 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:41220->10.217.0.208:8775: read: connection reset by peer" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.875655 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:41204->10.217.0.208:8775: read: connection reset by peer" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.886009 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.919340 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": read tcp 10.217.0.2:43764->10.217.0.168:8776: read: connection reset by peer" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.952963 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-operator-scripts\") pod \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\" (UID: \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\") " Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.953042 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp855\" (UniqueName: \"kubernetes.io/projected/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-kube-api-access-tp855\") pod \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\" (UID: \"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea\") " Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.953949 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77159d9e-2d2d-42a4-b8c7-77da8a05f9ea" (UID: "77159d9e-2d2d-42a4-b8c7-77da8a05f9ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.958936 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-kube-api-access-tp855" (OuterVolumeSpecName: "kube-api-access-tp855") pod "77159d9e-2d2d-42a4-b8c7-77da8a05f9ea" (UID: "77159d9e-2d2d-42a4-b8c7-77da8a05f9ea"). InnerVolumeSpecName "kube-api-access-tp855". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.965309 4813 scope.go:117] "RemoveContainer" containerID="085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973508 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9bn4b"] Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973838 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392bc7cc-af71-4ee6-b844-e5adeeabba64" containerName="mysql-bootstrap" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973849 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="392bc7cc-af71-4ee6-b844-e5adeeabba64" containerName="mysql-bootstrap" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973858 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973865 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973882 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="ovsdbserver-nb" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973888 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="ovsdbserver-nb" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973897 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973903 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973914 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" containerName="init" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973920 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" containerName="init" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973931 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" containerName="dnsmasq-dns" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973937 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" containerName="dnsmasq-dns" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973945 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e994ed3b-ff92-4997-b056-0c3b37fcebcf" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973951 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e994ed3b-ff92-4997-b056-0c3b37fcebcf" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973963 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerName="proxy-server" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973968 4813 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerName="proxy-server" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973975 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerName="barbican-keystone-listener-log" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.973981 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerName="barbican-keystone-listener-log" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.973994 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerName="barbican-keystone-listener" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.974001 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerName="barbican-keystone-listener" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.974011 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerName="proxy-httpd" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.974017 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerName="proxy-httpd" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.974027 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392bc7cc-af71-4ee6-b844-e5adeeabba64" containerName="galera" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.974032 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="392bc7cc-af71-4ee6-b844-e5adeeabba64" containerName="galera" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.974044 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerName="barbican-worker-log" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.974050 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerName="barbican-worker-log" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.974062 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerName="ovsdbserver-sb" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.974067 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerName="ovsdbserver-sb" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.974075 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade16a11-d1df-4706-9439-d0ef93069a66" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.974081 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade16a11-d1df-4706-9439-d0ef93069a66" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 17:05:52 crc kubenswrapper[4813]: E0129 17:05:52.974092 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerName="barbican-worker" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.974097 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerName="barbican-worker" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.974416 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="392bc7cc-af71-4ee6-b844-e5adeeabba64" containerName="galera" Jan 29 17:05:52 crc kubenswrapper[4813]: 
I0129 17:05:52.974433 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerName="barbican-worker" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976130 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" containerName="barbican-worker-log" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976146 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerName="barbican-keystone-listener-log" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976154 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="ovsdbserver-nb" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976162 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a6aaa4-f80a-49fa-8236-967825494243" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976175 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce996df-c4a1-431e-bae6-d16dfe1491f0" containerName="dnsmasq-dns" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976187 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerName="proxy-httpd" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976196 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e994ed3b-ff92-4997-b056-0c3b37fcebcf" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976206 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerName="openstack-network-exporter" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976215 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b9c405-fbca-44fa-820a-1613a7df4c9c" containerName="ovsdbserver-sb" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976221 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" containerName="proxy-server" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976234 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade16a11-d1df-4706-9439-d0ef93069a66" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976244 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" containerName="barbican-keystone-listener" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.976804 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:52 crc kubenswrapper[4813]: I0129 17:05:52.983455 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.004593 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9bn4b"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.064954 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b6pbr"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.068686 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts\") pod \"cbb62edf-bd31-473d-8833-664cb8007f92\" (UID: \"cbb62edf-bd31-473d-8833-664cb8007f92\") " Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.068826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8nhx\" (UniqueName: \"kubernetes.io/projected/cbb62edf-bd31-473d-8833-664cb8007f92-kube-api-access-l8nhx\") pod \"cbb62edf-bd31-473d-8833-664cb8007f92\" (UID: \"cbb62edf-bd31-473d-8833-664cb8007f92\") " Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.070256 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbb62edf-bd31-473d-8833-664cb8007f92" (UID: "cbb62edf-bd31-473d-8833-664cb8007f92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.076530 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxq2j\" (UniqueName: \"kubernetes.io/projected/056216b9-faea-40b1-860d-eb24a8214b44-kube-api-access-gxq2j\") pod \"root-account-create-update-9bn4b\" (UID: \"056216b9-faea-40b1-860d-eb24a8214b44\") " pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.082697 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts\") pod \"root-account-create-update-9bn4b\" (UID: \"056216b9-faea-40b1-860d-eb24a8214b44\") " pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.082933 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.082946 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbb62edf-bd31-473d-8833-664cb8007f92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.082958 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp855\" (UniqueName: \"kubernetes.io/projected/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea-kube-api-access-tp855\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.090913 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-bootstrap-b6pbr"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.101958 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb62edf-bd31-473d-8833-664cb8007f92-kube-api-access-l8nhx" (OuterVolumeSpecName: "kube-api-access-l8nhx") pod "cbb62edf-bd31-473d-8833-664cb8007f92" (UID: "cbb62edf-bd31-473d-8833-664cb8007f92"). InnerVolumeSpecName "kube-api-access-l8nhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.185850 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxq2j\" (UniqueName: \"kubernetes.io/projected/056216b9-faea-40b1-860d-eb24a8214b44-kube-api-access-gxq2j\") pod \"root-account-create-update-9bn4b\" (UID: \"056216b9-faea-40b1-860d-eb24a8214b44\") " pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.185908 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts\") pod \"root-account-create-update-9bn4b\" (UID: \"056216b9-faea-40b1-860d-eb24a8214b44\") " pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.186322 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8nhx\" (UniqueName: \"kubernetes.io/projected/cbb62edf-bd31-473d-8833-664cb8007f92-kube-api-access-l8nhx\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.187102 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts\") pod \"root-account-create-update-9bn4b\" (UID: \"056216b9-faea-40b1-860d-eb24a8214b44\") " pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.187336 4813 generic.go:334] "Generic (PLEG): container finished" podID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerID="b2f593231b9c6777313a8a59f22732777b3c29a364623b14415c2eec818f17c5" exitCode=0 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.187429 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dcb464dcd-dklmw" event={"ID":"e504c0d2-5734-47b7-aa7f-4cdb2a339d41","Type":"ContainerDied","Data":"b2f593231b9c6777313a8a59f22732777b3c29a364623b14415c2eec818f17c5"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.221539 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cf77-account-create-update-t6fp5" event={"ID":"77159d9e-2d2d-42a4-b8c7-77da8a05f9ea","Type":"ContainerDied","Data":"5817b319e42dff97eb55f7996c58d761c6eadde3c517694453af475a1bb75999"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.221730 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-cf77-account-create-update-t6fp5" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.234911 4813 generic.go:334] "Generic (PLEG): container finished" podID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerID="e05393a1d8d62733f2c992a97a9c85905297b74ae0d29c1c6ef323e9a3b39679" exitCode=0 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.235010 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8892a0d0-88f2-4e0a-aafb-20e04d9e6289","Type":"ContainerDied","Data":"e05393a1d8d62733f2c992a97a9c85905297b74ae0d29c1c6ef323e9a3b39679"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.244481 4813 generic.go:334] "Generic (PLEG): container finished" podID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerID="96ce0e98214ecc3dca853a25bbf06658a39336d51fa186c97aff7d03e5d42077" exitCode=0 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.244557 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7","Type":"ContainerDied","Data":"96ce0e98214ecc3dca853a25bbf06658a39336d51fa186c97aff7d03e5d42077"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.244588 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7","Type":"ContainerDied","Data":"4cf4ffa551658ab6516aaa2d11b20ce75a22e2178d569e3b52c85c53f655d16b"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.244601 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf4ffa551658ab6516aaa2d11b20ce75a22e2178d569e3b52c85c53f655d16b" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.245872 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxq2j\" (UniqueName: \"kubernetes.io/projected/056216b9-faea-40b1-860d-eb24a8214b44-kube-api-access-gxq2j\") pod \"root-account-create-update-9bn4b\" (UID: \"056216b9-faea-40b1-860d-eb24a8214b44\") " pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.309563 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.309850 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="6067c320-64f2-4c71-b4b0-bd136749200f" containerName="memcached" containerID="cri-o://9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c" gracePeriod=30 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.320144 4813 generic.go:334] "Generic (PLEG): container finished" podID="494a1163-f584-43fa-9224-825be2a90c27" containerID="8abb7a4ee4a3b9fc231ace5a03ef91398ee318f728d8debd57c6d16f0aae7ba3" exitCode=0 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.320334 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-94cf-account-create-update-fgllc"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.322732 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"494a1163-f584-43fa-9224-825be2a90c27","Type":"ContainerDied","Data":"8abb7a4ee4a3b9fc231ace5a03ef91398ee318f728d8debd57c6d16f0aae7ba3"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.322860 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.332730 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.333099 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-94cf-account-create-update-fgllc"] Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.452828 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4b0358d90883c6e8478c772d971d2902eb1bde203a499dda781652c9c94d54fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.456505 4813 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-9bn4b" secret="" err="secret \"galera-openstack-dockercfg-zc6j8\" not found" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.456565 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.458308 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts\") pod \"keystone-94cf-account-create-update-fgllc\" (UID: \"ae152adf-6c96-4431-9346-f95145c061f4\") " pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.458671 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm4hd\" (UniqueName: \"kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd\") pod \"keystone-94cf-account-create-update-fgllc\" (UID: \"ae152adf-6c96-4431-9346-f95145c061f4\") " pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.459225 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9wz8r" event={"ID":"cbb62edf-bd31-473d-8833-664cb8007f92","Type":"ContainerDied","Data":"dca3d3365652a788209de1983198b74648e4f0da80cb87b54402cc39b3158ff3"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.459399 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9wz8r" Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.464667 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38b2d85eb15f915f427130dc15b7810017d065f72bae0ee347154971510344f8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.467908 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38b2d85eb15f915f427130dc15b7810017d065f72bae0ee347154971510344f8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.490916 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4b0358d90883c6e8478c772d971d2902eb1bde203a499dda781652c9c94d54fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.491512 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.505047 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-757f6546f5-txkdm"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.505619 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-757f6546f5-txkdm" podUID="5d04685f-adbf-45b2-a649-34f27aadc2b1" containerName="keystone-api" containerID="cri-o://f6218d0edf63e4911af52e8755d9d930f8356969bd5a331cd99184e89398221d" gracePeriod=30 Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.509586 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="38b2d85eb15f915f427130dc15b7810017d065f72bae0ee347154971510344f8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.509630 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="51bbe82b-cffe-4d8b-ac7d-55507916528e" containerName="nova-cell0-conductor-conductor" Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.516796 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4b0358d90883c6e8478c772d971d2902eb1bde203a499dda781652c9c94d54fc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.516876 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9bd38907-4473-487b-8f79-85baaca96f00" containerName="nova-scheduler-scheduler" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 
17:05:53.530713 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cron-29495101-mhdx7"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.536744 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65d9b85856-5rdmb" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:35334->10.217.0.164:9311: read: connection reset by peer" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.536839 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-65d9b85856-5rdmb" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:35318->10.217.0.164:9311: read: connection reset by peer" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.540325 4813 scope.go:117] "RemoveContainer" containerID="ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.541051 4813 generic.go:334] "Generic (PLEG): container finished" podID="6471b922-d5d2-4a43-ab3a-fbe69b43beb9" containerID="02d822763cfec808d3e70dcfe053468ed8dee511781bd6c54dfa3346b4eb9935" exitCode=2 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.541143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6471b922-d5d2-4a43-ab3a-fbe69b43beb9","Type":"ContainerDied","Data":"02d822763cfec808d3e70dcfe053468ed8dee511781bd6c54dfa3346b4eb9935"} Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.541196 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891\": container with ID starting with ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891 not found: ID does not exist" containerID="ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.541218 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891"} err="failed to get container status \"ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891\": rpc error: code = NotFound desc = could not find container \"ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891\": container with ID starting with ad4e92e5b074d4b37be229c907ec9f5eddde5dc572c03ee52020228f8ae01891 not found: ID does not exist" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.541237 4813 scope.go:117] "RemoveContainer" containerID="085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e" Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.542551 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e\": container with ID starting with 085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e not found: ID does not exist" containerID="085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.542572 4813 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e"} err="failed to get container status \"085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e\": rpc error: code = NotFound desc = could not find container \"085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e\": container with ID starting with 085717aaa07ee7d3d4e202211b8efc7961012f8b5c661a77cb95405b55f2bc9e not found: ID does not exist" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.560333 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts\") pod \"keystone-94cf-account-create-update-fgllc\" (UID: \"ae152adf-6c96-4431-9346-f95145c061f4\") " pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.560549 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4hd\" (UniqueName: \"kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd\") pod \"keystone-94cf-account-create-update-fgllc\" (UID: \"ae152adf-6c96-4431-9346-f95145c061f4\") " pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.560575 4813 generic.go:334] "Generic (PLEG): container finished" podID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerID="90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea" exitCode=0 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.560602 4813 generic.go:334] "Generic (PLEG): container finished" podID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerID="25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d" exitCode=2 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.560648 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerDied","Data":"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.561133 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerDied","Data":"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d"} Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.561246 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.561280 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.561337 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts podName:ae152adf-6c96-4431-9346-f95145c061f4 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:54.061311902 +0000 UTC m=+2206.548515198 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts") pod "keystone-94cf-account-create-update-fgllc" (UID: "ae152adf-6c96-4431-9346-f95145c061f4") : configmap "openstack-scripts" not found Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.561362 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts podName:056216b9-faea-40b1-860d-eb24a8214b44 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:54.061352253 +0000 UTC m=+2206.548555599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts") pod "root-account-create-update-9bn4b" (UID: "056216b9-faea-40b1-860d-eb24a8214b44") : configmap "openstack-scripts" not found Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.565463 4813 generic.go:334] "Generic (PLEG): container finished" podID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerID="33964ffaf6cb92f00377625e2792de4b5bdd034d751dddf7790ae69bc7e4ed47" exitCode=0 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.565511 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd64c006-aedc-47b2-8704-3b1d05879f8c","Type":"ContainerDied","Data":"33964ffaf6cb92f00377625e2792de4b5bdd034d751dddf7790ae69bc7e4ed47"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.566633 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cb83-account-create-update-8jlrr" event={"ID":"6d3db8f1-dec6-4406-99e2-fce32fd0792a","Type":"ContainerDied","Data":"119bb47d0a7b1e6bc312a85c8fec75808e9fc0309356a4add0ade57b3524ca15"} Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.566709 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cb83-account-create-update-8jlrr" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.571008 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cron-29495101-mhdx7"] Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.572805 4813 projected.go:194] Error preparing data for projected volume kube-api-access-lm4hd for pod openstack/keystone-94cf-account-create-update-fgllc: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.572864 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd podName:ae152adf-6c96-4431-9346-f95145c061f4 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:54.072844139 +0000 UTC m=+2206.560047355 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lm4hd" (UniqueName: "kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd") pod "keystone-94cf-account-create-update-fgllc" (UID: "ae152adf-6c96-4431-9346-f95145c061f4") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.599447 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.636089 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9bn4b"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.664772 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-combined-ca-bundle\") pod \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.664871 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-etc-machine-id\") pod \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.664907 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-scripts\") pod \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.665023 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x69c\" (UniqueName: \"kubernetes.io/projected/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-kube-api-access-6x69c\") pod \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.665075 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data\") pod \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.665101 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data-custom\") pod \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\" (UID: \"620cdd0f-d89f-4197-90e1-d1f17f4fd7f7\") " Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.670766 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-scripts" (OuterVolumeSpecName: "scripts") pod "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" (UID: "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.670836 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" (UID: "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.679884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" (UID: "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.679892 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-kube-api-access-6x69c" (OuterVolumeSpecName: "kube-api-access-6x69c") pod "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" (UID: "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7"). InnerVolumeSpecName "kube-api-access-6x69c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.697398 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cf77-account-create-update-t6fp5"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.721772 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-cf77-account-create-update-t6fp5"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.769572 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.769595 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.769603 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.769647 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x69c\" (UniqueName: \"kubernetes.io/projected/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-kube-api-access-6x69c\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.777785 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-94cf-account-create-update-fgllc"] Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.794267 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" (UID: "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.861352 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" containerName="galera" containerID="cri-o://b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f" gracePeriod=30 Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.872681 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:53 crc kubenswrapper[4813]: E0129 17:05:53.953261 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-lm4hd operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-94cf-account-create-update-fgllc" podUID="ae152adf-6c96-4431-9346-f95145c061f4" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.974974 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data" (OuterVolumeSpecName: "config-data") pod "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" (UID: "620cdd0f-d89f-4197-90e1-d1f17f4fd7f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:53 crc kubenswrapper[4813]: I0129 17:05:53.984778 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.013644 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9wz8r"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.026650 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9wz8r"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.055268 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cb83-account-create-update-8jlrr"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.055679 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.065172 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cb83-account-create-update-8jlrr"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.066856 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.073706 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.076656 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4hd\" (UniqueName: \"kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd\") pod \"keystone-94cf-account-create-update-fgllc\" (UID: \"ae152adf-6c96-4431-9346-f95145c061f4\") " pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.076774 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts\") pod \"keystone-94cf-account-create-update-fgllc\" (UID: \"ae152adf-6c96-4431-9346-f95145c061f4\") " pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.076923 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: E0129 17:05:54.077000 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 17:05:54 crc kubenswrapper[4813]: E0129 17:05:54.077056 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts podName:ae152adf-6c96-4431-9346-f95145c061f4 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:55.077037398 +0000 UTC m=+2207.564240614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts") pod "keystone-94cf-account-create-update-fgllc" (UID: "ae152adf-6c96-4431-9346-f95145c061f4") : configmap "openstack-scripts" not found Jan 29 17:05:54 crc kubenswrapper[4813]: E0129 17:05:54.077215 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 17:05:54 crc kubenswrapper[4813]: E0129 17:05:54.077272 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts podName:056216b9-faea-40b1-860d-eb24a8214b44 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:55.077255124 +0000 UTC m=+2207.564458340 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts") pod "root-account-create-update-9bn4b" (UID: "056216b9-faea-40b1-860d-eb24a8214b44") : configmap "openstack-scripts" not found Jan 29 17:05:54 crc kubenswrapper[4813]: E0129 17:05:54.086859 4813 projected.go:194] Error preparing data for projected volume kube-api-access-lm4hd for pod openstack/keystone-94cf-account-create-update-fgllc: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 17:05:54 crc kubenswrapper[4813]: E0129 17:05:54.086944 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd podName:ae152adf-6c96-4431-9346-f95145c061f4 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:55.086920478 +0000 UTC m=+2207.574123694 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lm4hd" (UniqueName: "kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd") pod "keystone-94cf-account-create-update-fgllc" (UID: "ae152adf-6c96-4431-9346-f95145c061f4") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182005 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-logs\") pod \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182090 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-public-tls-certs\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182219 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-scripts\") pod \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182251 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-public-tls-certs\") pod \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182280 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd64c006-aedc-47b2-8704-3b1d05879f8c-etc-machine-id\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182356 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-config\") pod \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182382 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bc8n\" (UniqueName: \"kubernetes.io/projected/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-kube-api-access-2bc8n\") pod \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-combined-ca-bundle\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182440 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-combined-ca-bundle\") pod \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " Jan 29 17:05:54 crc 
kubenswrapper[4813]: I0129 17:05:54.182478 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djg7g\" (UniqueName: \"kubernetes.io/projected/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-kube-api-access-djg7g\") pod \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182524 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182559 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-combined-ca-bundle\") pod \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182593 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-nova-metadata-tls-certs\") pod \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182620 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-combined-ca-bundle\") pod \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182659 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njj7r\" (UniqueName: \"kubernetes.io/projected/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-api-access-njj7r\") pod \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182681 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-scripts\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182732 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-logs\") pod \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182743 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-logs" (OuterVolumeSpecName: "logs") pod "e504c0d2-5734-47b7-aa7f-4cdb2a339d41" (UID: "e504c0d2-5734-47b7-aa7f-4cdb2a339d41"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182757 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-internal-tls-certs\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182819 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-config-data\") pod \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182860 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd64c006-aedc-47b2-8704-3b1d05879f8c-logs\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182884 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data-custom\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182902 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr2gf\" (UniqueName: \"kubernetes.io/projected/cd64c006-aedc-47b2-8704-3b1d05879f8c-kube-api-access-hr2gf\") pod \"cd64c006-aedc-47b2-8704-3b1d05879f8c\" (UID: \"cd64c006-aedc-47b2-8704-3b1d05879f8c\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182922 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-internal-tls-certs\") pod \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182940 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-config-data\") pod \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\" (UID: \"e504c0d2-5734-47b7-aa7f-4cdb2a339d41\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.182957 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-certs\") pod \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\" (UID: \"6471b922-d5d2-4a43-ab3a-fbe69b43beb9\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.183311 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.190798 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.192704 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd64c006-aedc-47b2-8704-3b1d05879f8c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.201775 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd64c006-aedc-47b2-8704-3b1d05879f8c-logs" (OuterVolumeSpecName: "logs") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.202968 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-kube-api-access-djg7g" (OuterVolumeSpecName: "kube-api-access-djg7g") pod "8892a0d0-88f2-4e0a-aafb-20e04d9e6289" (UID: "8892a0d0-88f2-4e0a-aafb-20e04d9e6289"). InnerVolumeSpecName "kube-api-access-djg7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.203585 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-logs" (OuterVolumeSpecName: "logs") pod "8892a0d0-88f2-4e0a-aafb-20e04d9e6289" (UID: "8892a0d0-88f2-4e0a-aafb-20e04d9e6289"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.219519 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd64c006-aedc-47b2-8704-3b1d05879f8c-kube-api-access-hr2gf" (OuterVolumeSpecName: "kube-api-access-hr2gf") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). InnerVolumeSpecName "kube-api-access-hr2gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.224892 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-kube-api-access-2bc8n" (OuterVolumeSpecName: "kube-api-access-2bc8n") pod "e504c0d2-5734-47b7-aa7f-4cdb2a339d41" (UID: "e504c0d2-5734-47b7-aa7f-4cdb2a339d41"). InnerVolumeSpecName "kube-api-access-2bc8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.224891 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-scripts" (OuterVolumeSpecName: "scripts") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.229635 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-api-access-njj7r" (OuterVolumeSpecName: "kube-api-access-njj7r") pod "6471b922-d5d2-4a43-ab3a-fbe69b43beb9" (UID: "6471b922-d5d2-4a43-ab3a-fbe69b43beb9"). InnerVolumeSpecName "kube-api-access-njj7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.230416 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-scripts" (OuterVolumeSpecName: "scripts") pod "e504c0d2-5734-47b7-aa7f-4cdb2a339d41" (UID: "e504c0d2-5734-47b7-aa7f-4cdb2a339d41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.251378 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e" path="/var/lib/kubelet/pods/0e4027aa-3ee1-4c5c-82c1-ac1fda16d98e/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.253590 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392bc7cc-af71-4ee6-b844-e5adeeabba64" path="/var/lib/kubelet/pods/392bc7cc-af71-4ee6-b844-e5adeeabba64/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.254627 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c26451a-b543-4a70-8b46-73cd7d92e45c" path="/var/lib/kubelet/pods/3c26451a-b543-4a70-8b46-73cd7d92e45c/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.255251 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3db8f1-dec6-4406-99e2-fce32fd0792a" path="/var/lib/kubelet/pods/6d3db8f1-dec6-4406-99e2-fce32fd0792a/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.256632 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77159d9e-2d2d-42a4-b8c7-77da8a05f9ea" path="/var/lib/kubelet/pods/77159d9e-2d2d-42a4-b8c7-77da8a05f9ea/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.257151 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f2ea2fe-60d7-41f2-bc76-9c58e892bc59" path="/var/lib/kubelet/pods/9f2ea2fe-60d7-41f2-bc76-9c58e892bc59/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.257893 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28a20c4-1155-4d60-b6be-011bb1479366" path="/var/lib/kubelet/pods/a28a20c4-1155-4d60-b6be-011bb1479366/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.258610 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade16a11-d1df-4706-9439-d0ef93069a66" path="/var/lib/kubelet/pods/ade16a11-d1df-4706-9439-d0ef93069a66/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.259769 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ab9d45-b855-4c1a-9e32-24c9d3052f52" path="/var/lib/kubelet/pods/c9ab9d45-b855-4c1a-9e32-24c9d3052f52/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.260350 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb62edf-bd31-473d-8833-664cb8007f92" path="/var/lib/kubelet/pods/cbb62edf-bd31-473d-8833-664cb8007f92/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.260722 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f087a01a-3ee6-4655-9a03-a48fd4f903bb" path="/var/lib/kubelet/pods/f087a01a-3ee6-4655-9a03-a48fd4f903bb/volumes" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284752 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bc8n\" (UniqueName: \"kubernetes.io/projected/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-kube-api-access-2bc8n\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 
17:05:54.284786 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djg7g\" (UniqueName: \"kubernetes.io/projected/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-kube-api-access-djg7g\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284796 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njj7r\" (UniqueName: \"kubernetes.io/projected/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-api-access-njj7r\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284804 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284812 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284820 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd64c006-aedc-47b2-8704-3b1d05879f8c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284829 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284837 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr2gf\" (UniqueName: \"kubernetes.io/projected/cd64c006-aedc-47b2-8704-3b1d05879f8c-kube-api-access-hr2gf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284845 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.284852 4813 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd64c006-aedc-47b2-8704-3b1d05879f8c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.295537 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6471b922-d5d2-4a43-ab3a-fbe69b43beb9" (UID: "6471b922-d5d2-4a43-ab3a-fbe69b43beb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.334311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8892a0d0-88f2-4e0a-aafb-20e04d9e6289" (UID: "8892a0d0-88f2-4e0a-aafb-20e04d9e6289"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.339312 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e504c0d2-5734-47b7-aa7f-4cdb2a339d41" (UID: "e504c0d2-5734-47b7-aa7f-4cdb2a339d41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.347870 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.353981 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.358402 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.386642 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8892a0d0-88f2-4e0a-aafb-20e04d9e6289" (UID: "8892a0d0-88f2-4e0a-aafb-20e04d9e6289"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.386774 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-nova-metadata-tls-certs\") pod \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\" (UID: \"8892a0d0-88f2-4e0a-aafb-20e04d9e6289\") " Jan 29 17:05:54 crc kubenswrapper[4813]: W0129 17:05:54.387787 4813 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8892a0d0-88f2-4e0a-aafb-20e04d9e6289/volumes/kubernetes.io~secret/nova-metadata-tls-certs Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.387805 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8892a0d0-88f2-4e0a-aafb-20e04d9e6289" (UID: "8892a0d0-88f2-4e0a-aafb-20e04d9e6289"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.390866 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.390919 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.390933 4813 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.390947 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.390959 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.408622 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-config-data" (OuterVolumeSpecName: "config-data") pod "e504c0d2-5734-47b7-aa7f-4cdb2a339d41" (UID: "e504c0d2-5734-47b7-aa7f-4cdb2a339d41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.408726 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-config-data" (OuterVolumeSpecName: "config-data") pod "8892a0d0-88f2-4e0a-aafb-20e04d9e6289" (UID: "8892a0d0-88f2-4e0a-aafb-20e04d9e6289"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.412620 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.421023 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "6471b922-d5d2-4a43-ab3a-fbe69b43beb9" (UID: "6471b922-d5d2-4a43-ab3a-fbe69b43beb9"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.445308 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data" (OuterVolumeSpecName: "config-data") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.445395 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd64c006-aedc-47b2-8704-3b1d05879f8c" (UID: "cd64c006-aedc-47b2-8704-3b1d05879f8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.448654 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e504c0d2-5734-47b7-aa7f-4cdb2a339d41" (UID: "e504c0d2-5734-47b7-aa7f-4cdb2a339d41"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.482772 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6471b922-d5d2-4a43-ab3a-fbe69b43beb9" (UID: "6471b922-d5d2-4a43-ab3a-fbe69b43beb9"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492324 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-internal-tls-certs\") pod \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492379 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-config-data\") pod \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492409 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-logs\") pod \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492440 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"494a1163-f584-43fa-9224-825be2a90c27\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492497 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-config-data\") pod \"494a1163-f584-43fa-9224-825be2a90c27\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492568 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86vwf\" (UniqueName: \"kubernetes.io/projected/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-kube-api-access-86vwf\") pod \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " Jan 29 17:05:54 crc 
kubenswrapper[4813]: I0129 17:05:54.492618 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-httpd-run\") pod \"494a1163-f584-43fa-9224-825be2a90c27\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492659 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfrvl\" (UniqueName: \"kubernetes.io/projected/494a1163-f584-43fa-9224-825be2a90c27-kube-api-access-xfrvl\") pod \"494a1163-f584-43fa-9224-825be2a90c27\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492706 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-logs\") pod \"494a1163-f584-43fa-9224-825be2a90c27\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492743 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-scripts\") pod \"494a1163-f584-43fa-9224-825be2a90c27\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492781 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-combined-ca-bundle\") pod \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492826 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-internal-tls-certs\") pod \"494a1163-f584-43fa-9224-825be2a90c27\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492852 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-public-tls-certs\") pod \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\" (UID: \"d58049e2-10b0-4b6d-9a9e-d90420a1cecb\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.492894 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-combined-ca-bundle\") pod \"494a1163-f584-43fa-9224-825be2a90c27\" (UID: \"494a1163-f584-43fa-9224-825be2a90c27\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.494535 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8892a0d0-88f2-4e0a-aafb-20e04d9e6289-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.494566 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.494582 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
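When auditing a detach burst like this one, it can help to tally how many volumes were reported detached per pod UID. A small Go parser over the "Volume detached" lines as reformatted above; the regexp is keyed to the exact quoting seen in this log and would need adjusting for other formats (local-volume UniqueNames carry no pod UID and are deliberately skipped):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Pulls the pod UID and volume name out of the UniqueName payload of a
// "Volume detached" entry, e.g. kubernetes.io/secret/<uid>-config-data.
var uniqueName = regexp.MustCompile(`UniqueName: \\"kubernetes\.io/[^/]+/([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})-([^\\"]+)\\"`)

func main() {
	perPod := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "Volume detached for volume") {
			continue
		}
		if m := uniqueName.FindStringSubmatch(line); m != nil {
			perPod[m[1]]++ // m[1] = pod UID, m[2] = volume name
		}
	}
	for uid, n := range perPod {
		fmt.Printf("%s: %d volumes detached\n", uid, n)
	}
}
```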
\"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.494593 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.494605 4813 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.494617 4813 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6471b922-d5d2-4a43-ab3a-fbe69b43beb9-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.494628 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.494640 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd64c006-aedc-47b2-8704-3b1d05879f8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.495055 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-logs" (OuterVolumeSpecName: "logs") pod "d58049e2-10b0-4b6d-9a9e-d90420a1cecb" (UID: "d58049e2-10b0-4b6d-9a9e-d90420a1cecb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.495713 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-logs" (OuterVolumeSpecName: "logs") pod "494a1163-f584-43fa-9224-825be2a90c27" (UID: "494a1163-f584-43fa-9224-825be2a90c27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.496875 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "494a1163-f584-43fa-9224-825be2a90c27" (UID: "494a1163-f584-43fa-9224-825be2a90c27"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.497283 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494a1163-f584-43fa-9224-825be2a90c27-kube-api-access-xfrvl" (OuterVolumeSpecName: "kube-api-access-xfrvl") pod "494a1163-f584-43fa-9224-825be2a90c27" (UID: "494a1163-f584-43fa-9224-825be2a90c27"). InnerVolumeSpecName "kube-api-access-xfrvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.516005 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-scripts" (OuterVolumeSpecName: "scripts") pod "494a1163-f584-43fa-9224-825be2a90c27" (UID: "494a1163-f584-43fa-9224-825be2a90c27"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.517358 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "494a1163-f584-43fa-9224-825be2a90c27" (UID: "494a1163-f584-43fa-9224-825be2a90c27"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.518027 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-kube-api-access-86vwf" (OuterVolumeSpecName: "kube-api-access-86vwf") pod "d58049e2-10b0-4b6d-9a9e-d90420a1cecb" (UID: "d58049e2-10b0-4b6d-9a9e-d90420a1cecb"). InnerVolumeSpecName "kube-api-access-86vwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.556699 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.585747 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.595451 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.595485 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd64c006-aedc-47b2-8704-3b1d05879f8c","Type":"ContainerDied","Data":"3490996d3aad804b072a49ce30c9f6d370b0844f749b53aa9772ee7c8c5eab25"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.595532 4813 scope.go:117] "RemoveContainer" containerID="33964ffaf6cb92f00377625e2792de4b5bdd034d751dddf7790ae69bc7e4ed47" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596048 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-scripts\") pod \"296a369a-d653-4033-9a10-7077b62875f1\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596093 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data\") pod \"dd942f21-0785-443e-ab04-27548ecd9207\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596146 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-public-tls-certs\") pod \"dd942f21-0785-443e-ab04-27548ecd9207\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596187 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data-custom\") pod \"dd942f21-0785-443e-ab04-27548ecd9207\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596253 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-config-data\") pod \"296a369a-d653-4033-9a10-7077b62875f1\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596284 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-internal-tls-certs\") pod \"dd942f21-0785-443e-ab04-27548ecd9207\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596316 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-combined-ca-bundle\") pod \"296a369a-d653-4033-9a10-7077b62875f1\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596362 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd942f21-0785-443e-ab04-27548ecd9207-logs\") pod \"dd942f21-0785-443e-ab04-27548ecd9207\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596388 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"296a369a-d653-4033-9a10-7077b62875f1\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596475 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w898c\" (UniqueName: \"kubernetes.io/projected/dd942f21-0785-443e-ab04-27548ecd9207-kube-api-access-w898c\") pod \"dd942f21-0785-443e-ab04-27548ecd9207\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596502 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-public-tls-certs\") pod \"296a369a-d653-4033-9a10-7077b62875f1\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596527 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-httpd-run\") pod \"296a369a-d653-4033-9a10-7077b62875f1\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596571 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-combined-ca-bundle\") pod \"dd942f21-0785-443e-ab04-27548ecd9207\" (UID: \"dd942f21-0785-443e-ab04-27548ecd9207\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596658 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgpq5\" (UniqueName: \"kubernetes.io/projected/296a369a-d653-4033-9a10-7077b62875f1-kube-api-access-bgpq5\") pod \"296a369a-d653-4033-9a10-7077b62875f1\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.596735 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-logs\") pod \"296a369a-d653-4033-9a10-7077b62875f1\" (UID: \"296a369a-d653-4033-9a10-7077b62875f1\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.597246 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.597279 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.597292 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86vwf\" (UniqueName: \"kubernetes.io/projected/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-kube-api-access-86vwf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.597306 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.597319 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfrvl\" (UniqueName: \"kubernetes.io/projected/494a1163-f584-43fa-9224-825be2a90c27-kube-api-access-xfrvl\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.597330 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a1163-f584-43fa-9224-825be2a90c27-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.597341 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.597327 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "494a1163-f584-43fa-9224-825be2a90c27" (UID: "494a1163-f584-43fa-9224-825be2a90c27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.598597 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd942f21-0785-443e-ab04-27548ecd9207-logs" (OuterVolumeSpecName: "logs") pod "dd942f21-0785-443e-ab04-27548ecd9207" (UID: "dd942f21-0785-443e-ab04-27548ecd9207"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.600909 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "296a369a-d653-4033-9a10-7077b62875f1" (UID: "296a369a-d653-4033-9a10-7077b62875f1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.602412 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-logs" (OuterVolumeSpecName: "logs") pod "296a369a-d653-4033-9a10-7077b62875f1" (UID: "296a369a-d653-4033-9a10-7077b62875f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.611910 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-scripts" (OuterVolumeSpecName: "scripts") pod "296a369a-d653-4033-9a10-7077b62875f1" (UID: "296a369a-d653-4033-9a10-7077b62875f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.612009 4813 generic.go:334] "Generic (PLEG): container finished" podID="dd942f21-0785-443e-ab04-27548ecd9207" containerID="ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8" exitCode=0 Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.612082 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65d9b85856-5rdmb" event={"ID":"dd942f21-0785-443e-ab04-27548ecd9207","Type":"ContainerDied","Data":"ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.612136 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65d9b85856-5rdmb" event={"ID":"dd942f21-0785-443e-ab04-27548ecd9207","Type":"ContainerDied","Data":"12b3250852d091e4650dc92a412de53c26b0015c80225f790b4c807478790e2e"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.612205 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-65d9b85856-5rdmb" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.614082 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd942f21-0785-443e-ab04-27548ecd9207-kube-api-access-w898c" (OuterVolumeSpecName: "kube-api-access-w898c") pod "dd942f21-0785-443e-ab04-27548ecd9207" (UID: "dd942f21-0785-443e-ab04-27548ecd9207"). InnerVolumeSpecName "kube-api-access-w898c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.618529 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "296a369a-d653-4033-9a10-7077b62875f1" (UID: "296a369a-d653-4033-9a10-7077b62875f1"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.622499 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7dcb464dcd-dklmw" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.622941 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7dcb464dcd-dklmw" event={"ID":"e504c0d2-5734-47b7-aa7f-4cdb2a339d41","Type":"ContainerDied","Data":"52ef1d8e954414dc903d1f35dbdc42e5301f7aca7139b39bab67daf2642e0526"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.629765 4813 generic.go:334] "Generic (PLEG): container finished" podID="51bbe82b-cffe-4d8b-ac7d-55507916528e" containerID="38b2d85eb15f915f427130dc15b7810017d065f72bae0ee347154971510344f8" exitCode=0 Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.629841 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"51bbe82b-cffe-4d8b-ac7d-55507916528e","Type":"ContainerDied","Data":"38b2d85eb15f915f427130dc15b7810017d065f72bae0ee347154971510344f8"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.632053 4813 generic.go:334] "Generic (PLEG): container finished" podID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerID="34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567" exitCode=0 Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.632137 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.632143 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d58049e2-10b0-4b6d-9a9e-d90420a1cecb","Type":"ContainerDied","Data":"34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.632186 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d58049e2-10b0-4b6d-9a9e-d90420a1cecb","Type":"ContainerDied","Data":"0befb1043a9c84c4170f19346e523cf5f821d0577e6a17325262d8cecadd9400"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.641320 4813 generic.go:334] "Generic (PLEG): container finished" podID="296a369a-d653-4033-9a10-7077b62875f1" containerID="c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04" exitCode=0 Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.641388 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.641397 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296a369a-d653-4033-9a10-7077b62875f1","Type":"ContainerDied","Data":"c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.641578 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"296a369a-d653-4033-9a10-7077b62875f1","Type":"ContainerDied","Data":"a6a93a4957fda92aefa746e5ed86fbe72b0597c59cfa05a597c47a63d8c4434a"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.644749 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.647956 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-config-data" (OuterVolumeSpecName: "config-data") pod "d58049e2-10b0-4b6d-9a9e-d90420a1cecb" (UID: "d58049e2-10b0-4b6d-9a9e-d90420a1cecb"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.666370 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dd942f21-0785-443e-ab04-27548ecd9207" (UID: "dd942f21-0785-443e-ab04-27548ecd9207"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.666431 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296a369a-d653-4033-9a10-7077b62875f1-kube-api-access-bgpq5" (OuterVolumeSpecName: "kube-api-access-bgpq5") pod "296a369a-d653-4033-9a10-7077b62875f1" (UID: "296a369a-d653-4033-9a10-7077b62875f1"). InnerVolumeSpecName "kube-api-access-bgpq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.667223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d58049e2-10b0-4b6d-9a9e-d90420a1cecb" (UID: "d58049e2-10b0-4b6d-9a9e-d90420a1cecb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.673914 4813 scope.go:117] "RemoveContainer" containerID="8a7ea31b28e2c6bfe834bf71fe3196b448efd62becd905ed803c99a217f36124" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.674819 4813 generic.go:334] "Generic (PLEG): container finished" podID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerID="c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098" exitCode=0 Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.674855 4813 generic.go:334] "Generic (PLEG): container finished" podID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerID="e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072" exitCode=0 Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.674932 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerDied","Data":"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.674964 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerDied","Data":"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.674977 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a75d1-c71b-4af5-9fe8-83b53b60b86e","Type":"ContainerDied","Data":"6f5a7d6852568464b0d76f0a5aed82a725db41cb37d996a42cff1211c66e26ed"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699365 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699394 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 
29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699406 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd942f21-0785-443e-ab04-27548ecd9207-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699436 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699447 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699457 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w898c\" (UniqueName: \"kubernetes.io/projected/dd942f21-0785-443e-ab04-27548ecd9207-kube-api-access-w898c\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699466 4813 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699474 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699482 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699491 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgpq5\" (UniqueName: \"kubernetes.io/projected/296a369a-d653-4033-9a10-7077b62875f1-kube-api-access-bgpq5\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.699502 4813 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/296a369a-d653-4033-9a10-7077b62875f1-logs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.701933 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"494a1163-f584-43fa-9224-825be2a90c27","Type":"ContainerDied","Data":"6b7ef72bcf0a50683fd1b6f1c4256b15e025ec99f59800ac0452007736b919c5"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.702012 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.708078 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.709274 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8892a0d0-88f2-4e0a-aafb-20e04d9e6289","Type":"ContainerDied","Data":"794c7daea23eb33ec52d254a1732370f452682d28020a1e42fce9c727f21e78b"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.717405 4813 generic.go:334] "Generic (PLEG): container finished" podID="9bd38907-4473-487b-8f79-85baaca96f00" containerID="4b0358d90883c6e8478c772d971d2902eb1bde203a499dda781652c9c94d54fc" exitCode=0 Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.717815 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bd38907-4473-487b-8f79-85baaca96f00","Type":"ContainerDied","Data":"4b0358d90883c6e8478c772d971d2902eb1bde203a499dda781652c9c94d54fc"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.720689 4813 scope.go:117] "RemoveContainer" containerID="ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.737969 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.738341 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6471b922-d5d2-4a43-ab3a-fbe69b43beb9","Type":"ContainerDied","Data":"8fc0774379309064b6b334f290fda2d6ccfe576f3140132a8b2603e23940684d"} Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.738521 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.738813 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.741442 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "296a369a-d653-4033-9a10-7077b62875f1" (UID: "296a369a-d653-4033-9a10-7077b62875f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.750564 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.769888 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.773816 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd942f21-0785-443e-ab04-27548ecd9207" (UID: "dd942f21-0785-443e-ab04-27548ecd9207"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.774996 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.777749 4813 scope.go:117] "RemoveContainer" containerID="a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.786450 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d58049e2-10b0-4b6d-9a9e-d90420a1cecb" (UID: "d58049e2-10b0-4b6d-9a9e-d90420a1cecb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.789973 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.790262 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d58049e2-10b0-4b6d-9a9e-d90420a1cecb" (UID: "d58049e2-10b0-4b6d-9a9e-d90420a1cecb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.796775 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.805099 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-scripts\") pod \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.805505 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-ceilometer-tls-certs\") pod \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.805613 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-config-data\") pod \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.805782 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-combined-ca-bundle\") pod \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.805861 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-sg-core-conf-yaml\") pod \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.805937 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwwt6\" (UniqueName: \"kubernetes.io/projected/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-kube-api-access-jwwt6\") pod \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\" (UID: 
\"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.807212 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-run-httpd\") pod \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.807327 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-log-httpd\") pod \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\" (UID: \"a37a75d1-c71b-4af5-9fe8-83b53b60b86e\") " Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.809255 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a37a75d1-c71b-4af5-9fe8-83b53b60b86e" (UID: "a37a75d1-c71b-4af5-9fe8-83b53b60b86e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.809367 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.809399 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.809413 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.809426 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.809569 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d58049e2-10b0-4b6d-9a9e-d90420a1cecb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.811336 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-config-data" (OuterVolumeSpecName: "config-data") pod "494a1163-f584-43fa-9224-825be2a90c27" (UID: "494a1163-f584-43fa-9224-825be2a90c27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.811414 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.812446 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e504c0d2-5734-47b7-aa7f-4cdb2a339d41" (UID: "e504c0d2-5734-47b7-aa7f-4cdb2a339d41"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.813228 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a37a75d1-c71b-4af5-9fe8-83b53b60b86e" (UID: "a37a75d1-c71b-4af5-9fe8-83b53b60b86e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.834752 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-kube-api-access-jwwt6" (OuterVolumeSpecName: "kube-api-access-jwwt6") pod "a37a75d1-c71b-4af5-9fe8-83b53b60b86e" (UID: "a37a75d1-c71b-4af5-9fe8-83b53b60b86e"). InnerVolumeSpecName "kube-api-access-jwwt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.834758 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-scripts" (OuterVolumeSpecName: "scripts") pod "a37a75d1-c71b-4af5-9fe8-83b53b60b86e" (UID: "a37a75d1-c71b-4af5-9fe8-83b53b60b86e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.835047 4813 scope.go:117] "RemoveContainer" containerID="ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.835178 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd942f21-0785-443e-ab04-27548ecd9207" (UID: "dd942f21-0785-443e-ab04-27548ecd9207"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.838062 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:05:54 crc kubenswrapper[4813]: E0129 17:05:54.842476 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8\": container with ID starting with ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8 not found: ID does not exist" containerID="ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.842524 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8"} err="failed to get container status \"ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8\": rpc error: code = NotFound desc = could not find container \"ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8\": container with ID starting with ef8bb44ec91c879c16be20c071be118df6ef2b6595139a8f993990a40dbb59f8 not found: ID does not exist" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.842550 4813 scope.go:117] "RemoveContainer" containerID="a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a" Jan 29 17:05:54 crc kubenswrapper[4813]: E0129 17:05:54.843214 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a\": container with ID starting with a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a not found: ID does not exist" containerID="a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.843248 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a"} err="failed to get container status \"a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a\": rpc error: code = NotFound desc = could not find container \"a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a\": container with ID starting with a37478735dc240112bccf1463d6923f3dd4897f723833d25cba21676145d817a not found: ID does not exist" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.843267 4813 scope.go:117] "RemoveContainer" containerID="b2f593231b9c6777313a8a59f22732777b3c29a364623b14415c2eec818f17c5" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.849556 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.869408 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="6067c320-64f2-4c71-b4b0-bd136749200f" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.106:11211: connect: connection refused" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.872757 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dd942f21-0785-443e-ab04-27548ecd9207" (UID: "dd942f21-0785-443e-ab04-27548ecd9207"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.902510 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.911619 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e504c0d2-5734-47b7-aa7f-4cdb2a339d41-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.911647 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwwt6\" (UniqueName: \"kubernetes.io/projected/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-kube-api-access-jwwt6\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.911658 4813 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.911666 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.911674 4813 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.911683 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.911691 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.911699 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.912555 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data" (OuterVolumeSpecName: "config-data") pod "dd942f21-0785-443e-ab04-27548ecd9207" (UID: "dd942f21-0785-443e-ab04-27548ecd9207"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.929038 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.936399 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.953528 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "494a1163-f584-43fa-9224-825be2a90c27" (UID: "494a1163-f584-43fa-9224-825be2a90c27"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.958841 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a37a75d1-c71b-4af5-9fe8-83b53b60b86e" (UID: "a37a75d1-c71b-4af5-9fe8-83b53b60b86e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.967232 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a37a75d1-c71b-4af5-9fe8-83b53b60b86e" (UID: "a37a75d1-c71b-4af5-9fe8-83b53b60b86e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:54 crc kubenswrapper[4813]: I0129 17:05:54.967275 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a37a75d1-c71b-4af5-9fe8-83b53b60b86e" (UID: "a37a75d1-c71b-4af5-9fe8-83b53b60b86e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.000061 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-config-data" (OuterVolumeSpecName: "config-data") pod "296a369a-d653-4033-9a10-7077b62875f1" (UID: "296a369a-d653-4033-9a10-7077b62875f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.013384 4813 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.013418 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd942f21-0785-443e-ab04-27548ecd9207-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.013432 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.013444 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.013455 4813 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.013466 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/494a1163-f584-43fa-9224-825be2a90c27-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.013477 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.030055 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-config-data" (OuterVolumeSpecName: "config-data") pod "a37a75d1-c71b-4af5-9fe8-83b53b60b86e" (UID: "a37a75d1-c71b-4af5-9fe8-83b53b60b86e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: W0129 17:05:55.045778 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056216b9_faea_40b1_860d_eb24a8214b44.slice/crio-636d053af94fd9cf8a374e5ecee74653dd1ebeac09b284f9af57ff34a4e9ec20 WatchSource:0}: Error finding container 636d053af94fd9cf8a374e5ecee74653dd1ebeac09b284f9af57ff34a4e9ec20: Status 404 returned error can't find the container with id 636d053af94fd9cf8a374e5ecee74653dd1ebeac09b284f9af57ff34a4e9ec20 Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.054327 4813 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 17:05:55 crc kubenswrapper[4813]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 17:05:55 crc kubenswrapper[4813]: Jan 29 17:05:55 crc kubenswrapper[4813]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 17:05:55 crc kubenswrapper[4813]: Jan 29 17:05:55 crc kubenswrapper[4813]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 17:05:55 crc kubenswrapper[4813]: Jan 29 17:05:55 crc kubenswrapper[4813]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 17:05:55 crc kubenswrapper[4813]: Jan 29 17:05:55 crc kubenswrapper[4813]: if [ -n "" ]; then Jan 29 17:05:55 crc kubenswrapper[4813]: GRANT_DATABASE="" Jan 29 17:05:55 crc kubenswrapper[4813]: else Jan 29 17:05:55 crc kubenswrapper[4813]: GRANT_DATABASE="*" Jan 29 17:05:55 crc kubenswrapper[4813]: fi Jan 29 17:05:55 crc kubenswrapper[4813]: Jan 29 17:05:55 crc kubenswrapper[4813]: # going for maximum compatibility here: Jan 29 17:05:55 crc kubenswrapper[4813]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 17:05:55 crc kubenswrapper[4813]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 17:05:55 crc kubenswrapper[4813]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 17:05:55 crc kubenswrapper[4813]: # support updates Jan 29 17:05:55 crc kubenswrapper[4813]: Jan 29 17:05:55 crc kubenswrapper[4813]: $MYSQL_CMD < logger="UnhandledError" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.056354 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-9bn4b" podUID="056216b9-faea-40b1-860d-eb24a8214b44" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.057823 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9bn4b"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.109422 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "296a369a-d653-4033-9a10-7077b62875f1" (UID: "296a369a-d653-4033-9a10-7077b62875f1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.115356 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4hd\" (UniqueName: \"kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd\") pod \"keystone-94cf-account-create-update-fgllc\" (UID: \"ae152adf-6c96-4431-9346-f95145c061f4\") " pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.115478 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts\") pod \"keystone-94cf-account-create-update-fgllc\" (UID: \"ae152adf-6c96-4431-9346-f95145c061f4\") " pod="openstack/keystone-94cf-account-create-update-fgllc" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.115613 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a75d1-c71b-4af5-9fe8-83b53b60b86e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.115632 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/296a369a-d653-4033-9a10-7077b62875f1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.115704 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.115755 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts podName:ae152adf-6c96-4431-9346-f95145c061f4 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:57.115738786 +0000 UTC m=+2209.602942002 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts") pod "keystone-94cf-account-create-update-fgllc" (UID: "ae152adf-6c96-4431-9346-f95145c061f4") : configmap "openstack-scripts" not found Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.115910 4813 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.115978 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts podName:056216b9-faea-40b1-860d-eb24a8214b44 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:57.115959702 +0000 UTC m=+2209.603162988 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts") pod "root-account-create-update-9bn4b" (UID: "056216b9-faea-40b1-860d-eb24a8214b44") : configmap "openstack-scripts" not found Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.118919 4813 projected.go:194] Error preparing data for projected volume kube-api-access-lm4hd for pod openstack/keystone-94cf-account-create-update-fgllc: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.119126 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd podName:ae152adf-6c96-4431-9346-f95145c061f4 nodeName:}" failed. No retries permitted until 2026-01-29 17:05:57.11907724 +0000 UTC m=+2209.606280456 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lm4hd" (UniqueName: "kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd") pod "keystone-94cf-account-create-update-fgllc" (UID: "ae152adf-6c96-4431-9346-f95145c061f4") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.194634 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.199891 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.202549 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.202586 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7baf1369-6dba-465b-aa2f-f518ae175d80" containerName="nova-cell1-conductor-conductor" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.203303 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.221013 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-combined-ca-bundle\") pod \"9bd38907-4473-487b-8f79-85baaca96f00\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.221214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-config-data\") pod \"9bd38907-4473-487b-8f79-85baaca96f00\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.221236 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7qrd\" (UniqueName: \"kubernetes.io/projected/9bd38907-4473-487b-8f79-85baaca96f00-kube-api-access-j7qrd\") pod \"9bd38907-4473-487b-8f79-85baaca96f00\" (UID: \"9bd38907-4473-487b-8f79-85baaca96f00\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.227066 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.236577 4813 scope.go:117] "RemoveContainer" containerID="e28682307f39725fe7f765dc526b169a558cafd630133cfcaa46e4c31f5198f9" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.263328 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bd38907-4473-487b-8f79-85baaca96f00" (UID: "9bd38907-4473-487b-8f79-85baaca96f00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.264750 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd38907-4473-487b-8f79-85baaca96f00-kube-api-access-j7qrd" (OuterVolumeSpecName: "kube-api-access-j7qrd") pod "9bd38907-4473-487b-8f79-85baaca96f00" (UID: "9bd38907-4473-487b-8f79-85baaca96f00"). InnerVolumeSpecName "kube-api-access-j7qrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.276672 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.277258 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.280772 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.289513 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.293907 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.293945 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.298231 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.298318 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.298745 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.315139 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.323417 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-config-data" (OuterVolumeSpecName: "config-data") pod "9bd38907-4473-487b-8f79-85baaca96f00" (UID: "9bd38907-4473-487b-8f79-85baaca96f00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.324097 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-config-data\") pod \"51bbe82b-cffe-4d8b-ac7d-55507916528e\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.324214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-combined-ca-bundle\") pod \"51bbe82b-cffe-4d8b-ac7d-55507916528e\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.324371 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swzb7\" (UniqueName: \"kubernetes.io/projected/51bbe82b-cffe-4d8b-ac7d-55507916528e-kube-api-access-swzb7\") pod \"51bbe82b-cffe-4d8b-ac7d-55507916528e\" (UID: \"51bbe82b-cffe-4d8b-ac7d-55507916528e\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.326269 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.326298 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7qrd\" (UniqueName: \"kubernetes.io/projected/9bd38907-4473-487b-8f79-85baaca96f00-kube-api-access-j7qrd\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.326461 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9bd38907-4473-487b-8f79-85baaca96f00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.328665 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-65d9b85856-5rdmb"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.337239 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.337277 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-65d9b85856-5rdmb"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.341712 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bbe82b-cffe-4d8b-ac7d-55507916528e-kube-api-access-swzb7" (OuterVolumeSpecName: "kube-api-access-swzb7") pod "51bbe82b-cffe-4d8b-ac7d-55507916528e" (UID: "51bbe82b-cffe-4d8b-ac7d-55507916528e"). InnerVolumeSpecName "kube-api-access-swzb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.345972 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7dcb464dcd-dklmw"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.353982 4813 scope.go:117] "RemoveContainer" containerID="34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.355542 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7dcb464dcd-dklmw"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.362526 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.369042 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.374549 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51bbe82b-cffe-4d8b-ac7d-55507916528e" (UID: "51bbe82b-cffe-4d8b-ac7d-55507916528e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.375266 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.377783 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-config-data" (OuterVolumeSpecName: "config-data") pod "51bbe82b-cffe-4d8b-ac7d-55507916528e" (UID: "51bbe82b-cffe-4d8b-ac7d-55507916528e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.380775 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.382038 4813 scope.go:117] "RemoveContainer" containerID="6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.405596 4813 scope.go:117] "RemoveContainer" containerID="34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.405989 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567\": container with ID starting with 34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567 not found: ID does not exist" containerID="34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.406044 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567"} err="failed to get container status \"34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567\": rpc error: code = NotFound desc = could not find container \"34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567\": container with ID starting with 34ab2fa5678ff4e1187324c00e8a655b685401de59456a7ccf6bc46f71745567 not found: ID does not exist" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.406064 4813 scope.go:117] "RemoveContainer" containerID="6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.406722 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4\": container with ID starting with 6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4 not found: ID does not exist" containerID="6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.406782 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4"} err="failed to get container status \"6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4\": rpc error: code = NotFound desc = could not find container \"6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4\": container with ID starting with 6a99efd8ff60bf738fb2b3557f0ace52a29d5b538349a3f0d797ce2115070cf4 not found: ID does not exist" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.406843 4813 scope.go:117] "RemoveContainer" containerID="c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.427463 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-memcached-tls-certs\") pod \"6067c320-64f2-4c71-b4b0-bd136749200f\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.427641 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-config-data\") pod \"6067c320-64f2-4c71-b4b0-bd136749200f\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.427698 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnglg\" (UniqueName: \"kubernetes.io/projected/6067c320-64f2-4c71-b4b0-bd136749200f-kube-api-access-pnglg\") pod \"6067c320-64f2-4c71-b4b0-bd136749200f\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.427755 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-kolla-config\") pod \"6067c320-64f2-4c71-b4b0-bd136749200f\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.427799 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-combined-ca-bundle\") pod \"6067c320-64f2-4c71-b4b0-bd136749200f\" (UID: \"6067c320-64f2-4c71-b4b0-bd136749200f\") " Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.428487 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.428515 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51bbe82b-cffe-4d8b-ac7d-55507916528e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.428530 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swzb7\" (UniqueName: \"kubernetes.io/projected/51bbe82b-cffe-4d8b-ac7d-55507916528e-kube-api-access-swzb7\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.429892 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "6067c320-64f2-4c71-b4b0-bd136749200f" (UID: "6067c320-64f2-4c71-b4b0-bd136749200f"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.432607 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-config-data" (OuterVolumeSpecName: "config-data") pod "6067c320-64f2-4c71-b4b0-bd136749200f" (UID: "6067c320-64f2-4c71-b4b0-bd136749200f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.434504 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6067c320-64f2-4c71-b4b0-bd136749200f-kube-api-access-pnglg" (OuterVolumeSpecName: "kube-api-access-pnglg") pod "6067c320-64f2-4c71-b4b0-bd136749200f" (UID: "6067c320-64f2-4c71-b4b0-bd136749200f"). InnerVolumeSpecName "kube-api-access-pnglg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.437430 4813 scope.go:117] "RemoveContainer" containerID="a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.467723 4813 scope.go:117] "RemoveContainer" containerID="c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.469454 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6067c320-64f2-4c71-b4b0-bd136749200f" (UID: "6067c320-64f2-4c71-b4b0-bd136749200f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.469950 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04\": container with ID starting with c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04 not found: ID does not exist" containerID="c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.470012 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04"} err="failed to get container status \"c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04\": rpc error: code = NotFound desc = could not find container \"c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04\": container with ID starting with c129febaa4b63cdc17abd0c18ffc6aa57fd17a95834d4782dc550c62d5434a04 not found: ID does not exist" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.470041 4813 scope.go:117] "RemoveContainer" containerID="a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.470471 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b\": container with ID starting with a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b not found: ID does not exist" containerID="a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.470504 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b"} err="failed to get container status \"a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b\": rpc error: code = NotFound desc = could not find container \"a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b\": container with ID starting with a9103be6ada5bb27f922e421ecc09efea33a49cba04ee7717d16dbc5a0aa677b not found: ID does not exist" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.470523 4813 scope.go:117] "RemoveContainer" containerID="90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.476352 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "6067c320-64f2-4c71-b4b0-bd136749200f" (UID: "6067c320-64f2-4c71-b4b0-bd136749200f"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.496868 4813 scope.go:117] "RemoveContainer" containerID="25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.525327 4813 scope.go:117] "RemoveContainer" containerID="c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.530185 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.530212 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnglg\" (UniqueName: \"kubernetes.io/projected/6067c320-64f2-4c71-b4b0-bd136749200f-kube-api-access-pnglg\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.530222 4813 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6067c320-64f2-4c71-b4b0-bd136749200f-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.530230 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.530239 4813 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6067c320-64f2-4c71-b4b0-bd136749200f-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.558705 4813 scope.go:117] "RemoveContainer" containerID="e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.579476 4813 scope.go:117] "RemoveContainer" containerID="90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.579911 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea\": container with ID starting with 90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea not found: ID does not exist" containerID="90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.579939 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea"} err="failed to get container status \"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea\": rpc error: code = NotFound desc = could not find container \"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea\": container with ID starting with 90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea not found: ID does not exist" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.579964 4813 scope.go:117] "RemoveContainer" 
containerID="25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.580540 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d\": container with ID starting with 25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d not found: ID does not exist" containerID="25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.580563 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d"} err="failed to get container status \"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d\": rpc error: code = NotFound desc = could not find container \"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d\": container with ID starting with 25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d not found: ID does not exist" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.580577 4813 scope.go:117] "RemoveContainer" containerID="c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.580815 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098\": container with ID starting with c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098 not found: ID does not exist" containerID="c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.580831 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098"} err="failed to get container status \"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098\": rpc error: code = NotFound desc = could not find container \"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098\": container with ID starting with c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098 not found: ID does not exist" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.580845 4813 scope.go:117] "RemoveContainer" containerID="e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072" Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.581062 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072\": container with ID starting with e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072 not found: ID does not exist" containerID="e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072" Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.581079 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072"} err="failed to get container status \"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072\": rpc error: code = NotFound desc = could not find container \"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072\": container with ID starting with 
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.581090 4813 scope.go:117] "RemoveContainer" containerID="90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.581619 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea"} err="failed to get container status \"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea\": rpc error: code = NotFound desc = could not find container \"90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea\": container with ID starting with 90876dfac497fba47ff52ed31e77f3bfed256faeba118505a57e972c512775ea not found: ID does not exist"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.581636 4813 scope.go:117] "RemoveContainer" containerID="25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.581989 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d"} err="failed to get container status \"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d\": rpc error: code = NotFound desc = could not find container \"25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d\": container with ID starting with 25324d0071a12c9937a4e3d3daa048e9fc77a79e2ae81d05e9ef5e7bcc77fc4d not found: ID does not exist"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.582006 4813 scope.go:117] "RemoveContainer" containerID="c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.582284 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098"} err="failed to get container status \"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098\": rpc error: code = NotFound desc = could not find container \"c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098\": container with ID starting with c4e05798c30c02bccc4e1a2b5b94ea6d3f0642ea4fc5071691d9a8aefe56e098 not found: ID does not exist"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.582299 4813 scope.go:117] "RemoveContainer" containerID="e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.582512 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072"} err="failed to get container status \"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072\": rpc error: code = NotFound desc = could not find container \"e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072\": container with ID starting with e118bfae09f3397a28702db1f9ad79d4b01c3fce7b9b9a9b51ba3fb8e0484072 not found: ID does not exist"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.582528 4813 scope.go:117] "RemoveContainer" containerID="8abb7a4ee4a3b9fc231ace5a03ef91398ee318f728d8debd57c6d16f0aae7ba3"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.611190 4813 scope.go:117] "RemoveContainer" containerID="92a3054a95c28ed9a7e7347b5d0b1b2fbb5102f2edf17c6d56d65e646701fc21"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.644132 4813 scope.go:117] "RemoveContainer" containerID="e05393a1d8d62733f2c992a97a9c85905297b74ae0d29c1c6ef323e9a3b39679"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.676510 4813 scope.go:117] "RemoveContainer" containerID="9f57c05f59032c6a2d1a7cccf2fb5f3a52e7578aa8e9376e28838edd405492ff"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.696791 4813 scope.go:117] "RemoveContainer" containerID="02d822763cfec808d3e70dcfe053468ed8dee511781bd6c54dfa3346b4eb9935"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.752046 4813 generic.go:334] "Generic (PLEG): container finished" podID="6067c320-64f2-4c71-b4b0-bd136749200f" containerID="9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c" exitCode=0
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.752152 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6067c320-64f2-4c71-b4b0-bd136749200f","Type":"ContainerDied","Data":"9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c"}
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.752181 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6067c320-64f2-4c71-b4b0-bd136749200f","Type":"ContainerDied","Data":"f5e4e971210574647e0a6a98de87bf483a83477fafefdf2858d6b58021ac8c73"}
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.752202 4813 scope.go:117] "RemoveContainer" containerID="9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.752224 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.762961 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9bn4b" event={"ID":"056216b9-faea-40b1-860d-eb24a8214b44","Type":"ContainerStarted","Data":"636d053af94fd9cf8a374e5ecee74653dd1ebeac09b284f9af57ff34a4e9ec20"}
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.772052 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bd38907-4473-487b-8f79-85baaca96f00","Type":"ContainerDied","Data":"3a751ce11281774ab13527a33687fb7838754b1f8dc3d495e303f4eba9c5dc8d"}
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.772131 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.775978 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.794435 4813 scope.go:117] "RemoveContainer" containerID="9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c"
Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.796017 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c\": container with ID starting with 9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c not found: ID does not exist" containerID="9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.796051 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c"} err="failed to get container status \"9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c\": rpc error: code = NotFound desc = could not find container \"9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c\": container with ID starting with 9c486fa0bfb27e124bc52ed4ea9c6425e8fa2b9b48b6df500f018a77649a6b8c not found: ID does not exist"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.796073 4813 scope.go:117] "RemoveContainer" containerID="4b0358d90883c6e8478c772d971d2902eb1bde203a499dda781652c9c94d54fc"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.800488 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cmsdz" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" probeResult="failure" output="command timed out"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.806781 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"51bbe82b-cffe-4d8b-ac7d-55507916528e","Type":"ContainerDied","Data":"992e49d9cf399d77f2b8f65e990c0a0a20edd2b805f7467e27d13c8c46164946"}
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.806853 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.826536 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-94cf-account-create-update-fgllc"
Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.829695 4813 scope.go:117] "RemoveContainer" containerID="38b2d85eb15f915f427130dc15b7810017d065f72bae0ee347154971510344f8"
Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.835937 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Jan 29 17:05:55 crc kubenswrapper[4813]: E0129 17:05:55.835996 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data podName:bda951f8-8354-4ca3-be9e-f92f6fea40cc nodeName:}" failed. No retries permitted until 2026-01-29 17:06:03.835978731 +0000 UTC m=+2216.323181947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data") pod "rabbitmq-cell1-server-0" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc") : configmap "rabbitmq-cell1-config-data" not found
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data") pod "rabbitmq-cell1-server-0" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc") : configmap "rabbitmq-cell1-config-data" not found Jan 29 17:05:55 crc kubenswrapper[4813]: I0129 17:05:55.881314 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cmsdz" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" probeResult="failure" output=< Jan 29 17:05:55 crc kubenswrapper[4813]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 29 17:05:55 crc kubenswrapper[4813]: > Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.071811 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.079931 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.094580 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.104181 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.156601 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-94cf-account-create-update-fgllc"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.163222 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-94cf-account-create-update-fgllc"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.165771 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.172268 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.178891 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.184083 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.254415 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae152adf-6c96-4431-9346-f95145c061f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.254444 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm4hd\" (UniqueName: \"kubernetes.io/projected/ae152adf-6c96-4431-9346-f95145c061f4-kube-api-access-lm4hd\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.257911 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296a369a-d653-4033-9a10-7077b62875f1" path="/var/lib/kubelet/pods/296a369a-d653-4033-9a10-7077b62875f1/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.259092 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494a1163-f584-43fa-9224-825be2a90c27" path="/var/lib/kubelet/pods/494a1163-f584-43fa-9224-825be2a90c27/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.260127 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bbe82b-cffe-4d8b-ac7d-55507916528e" 
path="/var/lib/kubelet/pods/51bbe82b-cffe-4d8b-ac7d-55507916528e/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.261463 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6067c320-64f2-4c71-b4b0-bd136749200f" path="/var/lib/kubelet/pods/6067c320-64f2-4c71-b4b0-bd136749200f/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.262143 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" path="/var/lib/kubelet/pods/620cdd0f-d89f-4197-90e1-d1f17f4fd7f7/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.262928 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6471b922-d5d2-4a43-ab3a-fbe69b43beb9" path="/var/lib/kubelet/pods/6471b922-d5d2-4a43-ab3a-fbe69b43beb9/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.264840 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" path="/var/lib/kubelet/pods/8892a0d0-88f2-4e0a-aafb-20e04d9e6289/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.265435 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd38907-4473-487b-8f79-85baaca96f00" path="/var/lib/kubelet/pods/9bd38907-4473-487b-8f79-85baaca96f00/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.265972 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" path="/var/lib/kubelet/pods/a37a75d1-c71b-4af5-9fe8-83b53b60b86e/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.267094 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae152adf-6c96-4431-9346-f95145c061f4" path="/var/lib/kubelet/pods/ae152adf-6c96-4431-9346-f95145c061f4/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.267451 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" path="/var/lib/kubelet/pods/cd64c006-aedc-47b2-8704-3b1d05879f8c/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.269142 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" path="/var/lib/kubelet/pods/d58049e2-10b0-4b6d-9a9e-d90420a1cecb/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.270230 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd942f21-0785-443e-ab04-27548ecd9207" path="/var/lib/kubelet/pods/dd942f21-0785-443e-ab04-27548ecd9207/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.270980 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" path="/var/lib/kubelet/pods/e504c0d2-5734-47b7-aa7f-4cdb2a339d41/volumes" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.279480 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.355239 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxq2j\" (UniqueName: \"kubernetes.io/projected/056216b9-faea-40b1-860d-eb24a8214b44-kube-api-access-gxq2j\") pod \"056216b9-faea-40b1-860d-eb24a8214b44\" (UID: \"056216b9-faea-40b1-860d-eb24a8214b44\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.355337 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts\") pod \"056216b9-faea-40b1-860d-eb24a8214b44\" (UID: \"056216b9-faea-40b1-860d-eb24a8214b44\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.355953 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "056216b9-faea-40b1-860d-eb24a8214b44" (UID: "056216b9-faea-40b1-860d-eb24a8214b44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.360444 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056216b9-faea-40b1-860d-eb24a8214b44-kube-api-access-gxq2j" (OuterVolumeSpecName: "kube-api-access-gxq2j") pod "056216b9-faea-40b1-860d-eb24a8214b44" (UID: "056216b9-faea-40b1-860d-eb24a8214b44"). InnerVolumeSpecName "kube-api-access-gxq2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.445319 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ee2dda38-9821-41ec-b524-13a48badf5e9/ovn-northd/0.log" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.445405 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.458166 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxq2j\" (UniqueName: \"kubernetes.io/projected/056216b9-faea-40b1-860d-eb24a8214b44-kube-api-access-gxq2j\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.458207 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/056216b9-faea-40b1-860d-eb24a8214b44-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.481685 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.558781 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kolla-config\") pod \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559170 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-default\") pod \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559212 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-scripts\") pod \"ee2dda38-9821-41ec-b524-13a48badf5e9\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559280 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-northd-tls-certs\") pod \"ee2dda38-9821-41ec-b524-13a48badf5e9\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559319 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-generated\") pod \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559345 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-combined-ca-bundle\") pod \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559369 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-config\") pod \"ee2dda38-9821-41ec-b524-13a48badf5e9\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559414 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-rundir\") pod \"ee2dda38-9821-41ec-b524-13a48badf5e9\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559453 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxmsp\" (UniqueName: \"kubernetes.io/projected/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kube-api-access-pxmsp\") pod \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\" (UID: 
\"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559488 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-metrics-certs-tls-certs\") pod \"ee2dda38-9821-41ec-b524-13a48badf5e9\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559529 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-combined-ca-bundle\") pod \"ee2dda38-9821-41ec-b524-13a48badf5e9\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559597 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cptkv\" (UniqueName: \"kubernetes.io/projected/ee2dda38-9821-41ec-b524-13a48badf5e9-kube-api-access-cptkv\") pod \"ee2dda38-9821-41ec-b524-13a48badf5e9\" (UID: \"ee2dda38-9821-41ec-b524-13a48badf5e9\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559619 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-operator-scripts\") pod \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559650 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-galera-tls-certs\") pod \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\" (UID: \"09a277e9-f3d7-4499-b29b-ef8788c5e1b0\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559751 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "09a277e9-f3d7-4499-b29b-ef8788c5e1b0" (UID: "09a277e9-f3d7-4499-b29b-ef8788c5e1b0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.559827 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "09a277e9-f3d7-4499-b29b-ef8788c5e1b0" (UID: "09a277e9-f3d7-4499-b29b-ef8788c5e1b0"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.560223 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-config" (OuterVolumeSpecName: "config") pod "ee2dda38-9821-41ec-b524-13a48badf5e9" (UID: "ee2dda38-9821-41ec-b524-13a48badf5e9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.560617 4813 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.560643 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.560658 4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-config\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.561018 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "09a277e9-f3d7-4499-b29b-ef8788c5e1b0" (UID: "09a277e9-f3d7-4499-b29b-ef8788c5e1b0"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.561154 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-scripts" (OuterVolumeSpecName: "scripts") pod "ee2dda38-9821-41ec-b524-13a48badf5e9" (UID: "ee2dda38-9821-41ec-b524-13a48badf5e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.561290 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "ee2dda38-9821-41ec-b524-13a48badf5e9" (UID: "ee2dda38-9821-41ec-b524-13a48badf5e9"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.561621 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09a277e9-f3d7-4499-b29b-ef8788c5e1b0" (UID: "09a277e9-f3d7-4499-b29b-ef8788c5e1b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.568270 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kube-api-access-pxmsp" (OuterVolumeSpecName: "kube-api-access-pxmsp") pod "09a277e9-f3d7-4499-b29b-ef8788c5e1b0" (UID: "09a277e9-f3d7-4499-b29b-ef8788c5e1b0"). InnerVolumeSpecName "kube-api-access-pxmsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.569852 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2dda38-9821-41ec-b524-13a48badf5e9-kube-api-access-cptkv" (OuterVolumeSpecName: "kube-api-access-cptkv") pod "ee2dda38-9821-41ec-b524-13a48badf5e9" (UID: "ee2dda38-9821-41ec-b524-13a48badf5e9"). InnerVolumeSpecName "kube-api-access-cptkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.579671 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "09a277e9-f3d7-4499-b29b-ef8788c5e1b0" (UID: "09a277e9-f3d7-4499-b29b-ef8788c5e1b0"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.584319 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee2dda38-9821-41ec-b524-13a48badf5e9" (UID: "ee2dda38-9821-41ec-b524-13a48badf5e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.597362 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09a277e9-f3d7-4499-b29b-ef8788c5e1b0" (UID: "09a277e9-f3d7-4499-b29b-ef8788c5e1b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.606901 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "09a277e9-f3d7-4499-b29b-ef8788c5e1b0" (UID: "09a277e9-f3d7-4499-b29b-ef8788c5e1b0"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.651980 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "ee2dda38-9821-41ec-b524-13a48badf5e9" (UID: "ee2dda38-9821-41ec-b524-13a48badf5e9"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.660203 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "ee2dda38-9821-41ec-b524-13a48badf5e9" (UID: "ee2dda38-9821-41ec-b524-13a48badf5e9"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667042 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667127 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ee2dda38-9821-41ec-b524-13a48badf5e9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667151 4813 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667163 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667171 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ee2dda38-9821-41ec-b524-13a48badf5e9-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667182 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxmsp\" (UniqueName: \"kubernetes.io/projected/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-kube-api-access-pxmsp\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667190 4813 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667198 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2dda38-9821-41ec-b524-13a48badf5e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667207 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cptkv\" (UniqueName: \"kubernetes.io/projected/ee2dda38-9821-41ec-b524-13a48badf5e9-kube-api-access-cptkv\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667215 4813 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667225 4813 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09a277e9-f3d7-4499-b29b-ef8788c5e1b0-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.667253 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.676331 4813 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-757f6546f5-txkdm" podUID="5d04685f-adbf-45b2-a649-34f27aadc2b1" containerName="keystone-api" 
probeResult="failure" output="Get \"https://10.217.0.156:5000/v3\": read tcp 10.217.0.2:40352->10.217.0.156:5000: read: connection reset by peer" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.694708 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.768500 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: E0129 17:05:56.768581 4813 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 17:05:56 crc kubenswrapper[4813]: E0129 17:05:56.768631 4813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data podName:6463fe6f-cd6d-4078-8fa2-0d167de480df nodeName:}" failed. No retries permitted until 2026-01-29 17:06:04.768615767 +0000 UTC m=+2217.255818983 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data") pod "rabbitmq-server-0" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df") : configmap "rabbitmq-config-data" not found Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.789125 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.860682 4813 generic.go:334] "Generic (PLEG): container finished" podID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" containerID="b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f" exitCode=0 Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.860875 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09a277e9-f3d7-4499-b29b-ef8788c5e1b0","Type":"ContainerDied","Data":"b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.861016 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"09a277e9-f3d7-4499-b29b-ef8788c5e1b0","Type":"ContainerDied","Data":"09ee9b7de4a162868480c4564f5104a66713a75ff63d7cc741bfe0a4f3185625"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.861051 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.861091 4813 scope.go:117] "RemoveContainer" containerID="b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.869343 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4h6n\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-kube-api-access-s4h6n\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.869424 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-server-conf\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.869692 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.869754 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-tls\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.870022 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-erlang-cookie\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.870077 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bda951f8-8354-4ca3-be9e-f92f6fea40cc-pod-info\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.870132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bda951f8-8354-4ca3-be9e-f92f6fea40cc-erlang-cookie-secret\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.870175 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-confd\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.870208 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-plugins\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.870269 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.871234 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-plugins-conf\") pod \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\" (UID: \"bda951f8-8354-4ca3-be9e-f92f6fea40cc\") " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.872050 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.873082 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.874031 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.875409 4813 generic.go:334] "Generic (PLEG): container finished" podID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerID="a6111d6afee3055cda0c53ca25e0f552bbaa1c12f35b905627d640bdc35dfedb" exitCode=0 Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.875547 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6463fe6f-cd6d-4078-8fa2-0d167de480df","Type":"ContainerDied","Data":"a6111d6afee3055cda0c53ca25e0f552bbaa1c12f35b905627d640bdc35dfedb"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.878865 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bda951f8-8354-4ca3-be9e-f92f6fea40cc-pod-info" (OuterVolumeSpecName: "pod-info") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.885514 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.885639 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.885919 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-kube-api-access-s4h6n" (OuterVolumeSpecName: "kube-api-access-s4h6n") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "kube-api-access-s4h6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.889081 4813 generic.go:334] "Generic (PLEG): container finished" podID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerID="4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269" exitCode=0 Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.889266 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bda951f8-8354-4ca3-be9e-f92f6fea40cc","Type":"ContainerDied","Data":"4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.889404 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bda951f8-8354-4ca3-be9e-f92f6fea40cc","Type":"ContainerDied","Data":"e8a7e41dd3cdfd13086c1f136cc2cb4d23fc760fea21525032e09185bbb6dac1"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.889489 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.891501 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda951f8-8354-4ca3-be9e-f92f6fea40cc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.895470 4813 generic.go:334] "Generic (PLEG): container finished" podID="5d04685f-adbf-45b2-a649-34f27aadc2b1" containerID="f6218d0edf63e4911af52e8755d9d930f8356969bd5a331cd99184e89398221d" exitCode=0 Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.895531 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-757f6546f5-txkdm" event={"ID":"5d04685f-adbf-45b2-a649-34f27aadc2b1","Type":"ContainerDied","Data":"f6218d0edf63e4911af52e8755d9d930f8356969bd5a331cd99184e89398221d"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.896425 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data" (OuterVolumeSpecName: "config-data") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.898795 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ee2dda38-9821-41ec-b524-13a48badf5e9/ovn-northd/0.log" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.898931 4813 generic.go:334] "Generic (PLEG): container finished" podID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerID="9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c" exitCode=139 Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.899004 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.899021 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ee2dda38-9821-41ec-b524-13a48badf5e9","Type":"ContainerDied","Data":"9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.899054 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ee2dda38-9821-41ec-b524-13a48badf5e9","Type":"ContainerDied","Data":"3a7bc22fb530e3f11ed2b8c1d4f78e9138dcdf8b0a7c5fe11ebca00bcceaa80e"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.900794 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9bn4b" event={"ID":"056216b9-faea-40b1-860d-eb24a8214b44","Type":"ContainerDied","Data":"636d053af94fd9cf8a374e5ecee74653dd1ebeac09b284f9af57ff34a4e9ec20"} Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.900862 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9bn4b" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.946298 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-server-conf" (OuterVolumeSpecName: "server-conf") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.972907 4813 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.972971 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.972984 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.972997 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.973007 4813 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bda951f8-8354-4ca3-be9e-f92f6fea40cc-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.973018 4813 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bda951f8-8354-4ca3-be9e-f92f6fea40cc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.973031 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.973042 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.973052 4813 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bda951f8-8354-4ca3-be9e-f92f6fea40cc-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.973063 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4h6n\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-kube-api-access-s4h6n\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.989307 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 29 17:05:56 crc kubenswrapper[4813]: I0129 17:05:56.992860 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bda951f8-8354-4ca3-be9e-f92f6fea40cc" (UID: "bda951f8-8354-4ca3-be9e-f92f6fea40cc"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.026692 4813 scope.go:117] "RemoveContainer" containerID="a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.070883 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9bn4b"] Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.077818 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bda951f8-8354-4ca3-be9e-f92f6fea40cc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.077844 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.080323 4813 scope.go:117] "RemoveContainer" containerID="b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f" Jan 29 17:05:57 crc kubenswrapper[4813]: E0129 17:05:57.086775 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f\": container with ID starting with b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f not found: ID does not exist" containerID="b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.086828 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f"} err="failed to get container status \"b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f\": rpc error: code = NotFound desc = could not find container \"b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f\": container with ID starting with b1f10c3b5efd4f30af3601627c12e4cb566a099067f5fa07a3aaf0984bb8294f not found: ID does not exist" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.086864 4813 scope.go:117] "RemoveContainer" containerID="a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.090416 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9bn4b"] Jan 29 17:05:57 crc kubenswrapper[4813]: E0129 17:05:57.091943 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0\": container with ID starting with a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0 not found: ID does not exist" containerID="a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.091998 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0"} err="failed to get container status \"a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0\": rpc error: code = NotFound desc = could not find container \"a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0\": container with ID starting with a50ddaf6db557eb33790d7b9b98102a6897593075e71ccc5b787bf943f4850b0 not found: ID does not 
exist" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.092030 4813 scope.go:117] "RemoveContainer" containerID="4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.105014 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.112652 4813 scope.go:117] "RemoveContainer" containerID="7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.116141 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.135259 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.135971 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.138370 4813 scope.go:117] "RemoveContainer" containerID="4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269" Jan 29 17:05:57 crc kubenswrapper[4813]: E0129 17:05:57.138709 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269\": container with ID starting with 4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269 not found: ID does not exist" containerID="4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.138737 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269"} err="failed to get container status \"4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269\": rpc error: code = NotFound desc = could not find container \"4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269\": container with ID starting with 4548f56c687ea39dad5282fdf9ff22a67cb3581c187e7606dbf2676651cba269 not found: ID does not exist" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.138760 4813 scope.go:117] "RemoveContainer" containerID="7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4" Jan 29 17:05:57 crc kubenswrapper[4813]: E0129 17:05:57.138944 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4\": container with ID starting with 7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4 not found: ID does not exist" containerID="7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.138967 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4"} err="failed to get container status \"7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4\": rpc error: code = NotFound desc = could not find container \"7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4\": container with ID starting with 7f4c95f1c50dd995797c6166376f0a6adc4f18d770985d1c5f14490d603feaf4 not found: ID does not exist" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.138980 4813 
scope.go:117] "RemoveContainer" containerID="fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.156528 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.157553 4813 scope.go:117] "RemoveContainer" containerID="9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179536 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-erlang-cookie\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179575 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-plugins-conf\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179601 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77wdq\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-kube-api-access-77wdq\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179643 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-tls\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179718 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6463fe6f-cd6d-4078-8fa2-0d167de480df-pod-info\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179741 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179760 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-confd\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179829 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179847 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6463fe6f-cd6d-4078-8fa2-0d167de480df-erlang-cookie-secret\") pod 
\"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179881 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-plugins\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.179901 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-server-conf\") pod \"6463fe6f-cd6d-4078-8fa2-0d167de480df\" (UID: \"6463fe6f-cd6d-4078-8fa2-0d167de480df\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.186884 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.188534 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6463fe6f-cd6d-4078-8fa2-0d167de480df-pod-info" (OuterVolumeSpecName: "pod-info") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.189144 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.189716 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.190845 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.192672 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-kube-api-access-77wdq" (OuterVolumeSpecName: "kube-api-access-77wdq") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "kube-api-access-77wdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.195251 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.199180 4813 scope.go:117] "RemoveContainer" containerID="fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64" Jan 29 17:05:57 crc kubenswrapper[4813]: E0129 17:05:57.199634 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64\": container with ID starting with fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64 not found: ID does not exist" containerID="fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.199663 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64"} err="failed to get container status \"fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64\": rpc error: code = NotFound desc = could not find container \"fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64\": container with ID starting with fc2de6297131eb0dec48d165c7f9a149e75d6a91bfa44c9757f626a6b2f03b64 not found: ID does not exist" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.199684 4813 scope.go:117] "RemoveContainer" containerID="9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c" Jan 29 17:05:57 crc kubenswrapper[4813]: E0129 17:05:57.200140 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c\": container with ID starting with 9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c not found: ID does not exist" containerID="9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.200173 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c"} err="failed to get container status \"9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c\": rpc error: code = NotFound desc = could not find container \"9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c\": container with ID starting with 9747f239cdea94579208c96ae6112054acf8990eac2ceb2e7ca4ece080c84f7c not found: ID does not exist" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.203329 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6463fe6f-cd6d-4078-8fa2-0d167de480df-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.239065 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.257861 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-757f6546f5-txkdm" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.258612 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data" (OuterVolumeSpecName: "config-data") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.259587 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.274636 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-server-conf" (OuterVolumeSpecName: "server-conf") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294426 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294460 4813 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6463fe6f-cd6d-4078-8fa2-0d167de480df-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294470 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294479 4813 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294490 4813 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6463fe6f-cd6d-4078-8fa2-0d167de480df-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294500 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294508 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77wdq\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-kube-api-access-77wdq\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294515 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294523 4813 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6463fe6f-cd6d-4078-8fa2-0d167de480df-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.294554 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.354878 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.381657 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6463fe6f-cd6d-4078-8fa2-0d167de480df" (UID: "6463fe6f-cd6d-4078-8fa2-0d167de480df"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.395981 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wskzk\" (UniqueName: \"kubernetes.io/projected/5d04685f-adbf-45b2-a649-34f27aadc2b1-kube-api-access-wskzk\") pod \"5d04685f-adbf-45b2-a649-34f27aadc2b1\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396037 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-fernet-keys\") pod \"5d04685f-adbf-45b2-a649-34f27aadc2b1\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-config-data\") pod \"5d04685f-adbf-45b2-a649-34f27aadc2b1\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396079 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-combined-ca-bundle\") pod \"5d04685f-adbf-45b2-a649-34f27aadc2b1\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396103 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-internal-tls-certs\") pod \"5d04685f-adbf-45b2-a649-34f27aadc2b1\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396206 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-credential-keys\") pod \"5d04685f-adbf-45b2-a649-34f27aadc2b1\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396240 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-scripts\") pod \"5d04685f-adbf-45b2-a649-34f27aadc2b1\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396303 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-public-tls-certs\") pod \"5d04685f-adbf-45b2-a649-34f27aadc2b1\" (UID: \"5d04685f-adbf-45b2-a649-34f27aadc2b1\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396681 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.396701 4813 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6463fe6f-cd6d-4078-8fa2-0d167de480df-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: E0129 17:05:57.399581 4813 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 17:05:57 crc kubenswrapper[4813]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-29T17:05:50Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 17:05:57 crc kubenswrapper[4813]: /etc/init.d/functions: line 589: 1107 Alarm clock "$@" Jan 29 17:05:57 crc kubenswrapper[4813]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-cmsdz" message=< Jan 29 17:05:57 crc kubenswrapper[4813]: Exiting ovn-controller (1) [FAILED] Jan 29 17:05:57 crc kubenswrapper[4813]: Killing ovn-controller (1) [ OK ] Jan 29 17:05:57 crc kubenswrapper[4813]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 29 17:05:57 crc kubenswrapper[4813]: 2026-01-29T17:05:50Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 17:05:57 crc kubenswrapper[4813]: /etc/init.d/functions: line 589: 1107 Alarm clock "$@" Jan 29 17:05:57 crc kubenswrapper[4813]: > Jan 29 17:05:57 crc kubenswrapper[4813]: E0129 17:05:57.399616 4813 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 17:05:57 crc kubenswrapper[4813]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-29T17:05:50Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 17:05:57 crc kubenswrapper[4813]: /etc/init.d/functions: line 589: 1107 Alarm clock "$@" Jan 29 17:05:57 crc kubenswrapper[4813]: > pod="openstack/ovn-controller-cmsdz" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" containerID="cri-o://898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.399650 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-cmsdz" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" containerID="cri-o://898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806" gracePeriod=22 Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.420645 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d04685f-adbf-45b2-a649-34f27aadc2b1-kube-api-access-wskzk" (OuterVolumeSpecName: "kube-api-access-wskzk") pod "5d04685f-adbf-45b2-a649-34f27aadc2b1" (UID: 
"5d04685f-adbf-45b2-a649-34f27aadc2b1"). InnerVolumeSpecName "kube-api-access-wskzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.430145 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5d04685f-adbf-45b2-a649-34f27aadc2b1" (UID: "5d04685f-adbf-45b2-a649-34f27aadc2b1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.430311 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5d04685f-adbf-45b2-a649-34f27aadc2b1" (UID: "5d04685f-adbf-45b2-a649-34f27aadc2b1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.432360 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-scripts" (OuterVolumeSpecName: "scripts") pod "5d04685f-adbf-45b2-a649-34f27aadc2b1" (UID: "5d04685f-adbf-45b2-a649-34f27aadc2b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.449080 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d04685f-adbf-45b2-a649-34f27aadc2b1" (UID: "5d04685f-adbf-45b2-a649-34f27aadc2b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.453764 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-config-data" (OuterVolumeSpecName: "config-data") pod "5d04685f-adbf-45b2-a649-34f27aadc2b1" (UID: "5d04685f-adbf-45b2-a649-34f27aadc2b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.495419 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5d04685f-adbf-45b2-a649-34f27aadc2b1" (UID: "5d04685f-adbf-45b2-a649-34f27aadc2b1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.498764 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.498788 4813 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.498800 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.498809 4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.498817 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wskzk\" (UniqueName: \"kubernetes.io/projected/5d04685f-adbf-45b2-a649-34f27aadc2b1-kube-api-access-wskzk\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.498827 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.498835 4813 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.502721 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5d04685f-adbf-45b2-a649-34f27aadc2b1" (UID: "5d04685f-adbf-45b2-a649-34f27aadc2b1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.600696 4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04685f-adbf-45b2-a649-34f27aadc2b1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.736072 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cmsdz_98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b/ovn-controller/0.log" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.736181 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cmsdz" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.803324 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-log-ovn\") pod \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.803413 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-scripts\") pod \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.803483 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-combined-ca-bundle\") pod \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.803504 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run\") pod \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.803518 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run-ovn\") pod \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.803545 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2znxt\" (UniqueName: \"kubernetes.io/projected/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-kube-api-access-2znxt\") pod \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.803616 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-ovn-controller-tls-certs\") pod \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\" (UID: \"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b\") " Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.804249 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" (UID: "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.804325 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run" (OuterVolumeSpecName: "var-run") pod "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" (UID: "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.804980 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" (UID: "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.805471 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-scripts" (OuterVolumeSpecName: "scripts") pod "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" (UID: "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.822688 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" (UID: "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.822777 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-kube-api-access-2znxt" (OuterVolumeSpecName: "kube-api-access-2znxt") pod "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" (UID: "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b"). InnerVolumeSpecName "kube-api-access-2znxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.869836 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" (UID: "98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.905043 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.905084 4813 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.905100 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2znxt\" (UniqueName: \"kubernetes.io/projected/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-kube-api-access-2znxt\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.905127 4813 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.905138 4813 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.905149 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.905159 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.923636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-757f6546f5-txkdm" event={"ID":"5d04685f-adbf-45b2-a649-34f27aadc2b1","Type":"ContainerDied","Data":"7d1f61b8143da518eae9e93ab030fab9ebf2d458513e0564d13411c261f72353"} Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.923704 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-757f6546f5-txkdm" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.923711 4813 scope.go:117] "RemoveContainer" containerID="f6218d0edf63e4911af52e8755d9d930f8356969bd5a331cd99184e89398221d" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.927917 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cmsdz_98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b/ovn-controller/0.log" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.927974 4813 generic.go:334] "Generic (PLEG): container finished" podID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerID="898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806" exitCode=137 Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.928044 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz" event={"ID":"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b","Type":"ContainerDied","Data":"898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806"} Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.928076 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cmsdz" event={"ID":"98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b","Type":"ContainerDied","Data":"e65b4b7935d5bd7e34c9faff8ad9f97580ad68a97b86ef16e9ab5b1b1fbffd05"} Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.928159 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cmsdz" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.943190 4813 generic.go:334] "Generic (PLEG): container finished" podID="7baf1369-6dba-465b-aa2f-f518ae175d80" containerID="631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" exitCode=0 Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.943252 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7baf1369-6dba-465b-aa2f-f518ae175d80","Type":"ContainerDied","Data":"631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593"} Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.947230 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.947253 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6463fe6f-cd6d-4078-8fa2-0d167de480df","Type":"ContainerDied","Data":"a400750b7cc69e7d79af657f702de65ef2a4874acf812f06fbd3aac6f351109f"} Jan 29 17:05:57 crc kubenswrapper[4813]: I0129 17:05:57.966136 4813 scope.go:117] "RemoveContainer" containerID="898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.002855 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-757f6546f5-txkdm"] Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.017288 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-757f6546f5-txkdm"] Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.019245 4813 scope.go:117] "RemoveContainer" containerID="898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.020106 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806\": container with ID starting with 898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806 not found: ID does not exist" containerID="898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.020159 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806"} err="failed to get container status \"898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806\": rpc error: code = NotFound desc = could not find container \"898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806\": container with ID starting with 898bb96ccf5dc99cc9459f6224913b8e4f38b05a12646fbe753ae002671fe806 not found: ID does not exist" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.020193 4813 scope.go:117] "RemoveContainer" containerID="a6111d6afee3055cda0c53ca25e0f552bbaa1c12f35b905627d640bdc35dfedb" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.030720 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cmsdz"] Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.038675 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cmsdz"] Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.044304 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.046603 4813 scope.go:117] "RemoveContainer" containerID="53ac6aa3a537c0b6fd153a13fe50bef12959cc535873b4b51c03e2ead8c60e6c" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.049348 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.251722 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056216b9-faea-40b1-860d-eb24a8214b44" path="/var/lib/kubelet/pods/056216b9-faea-40b1-860d-eb24a8214b44/volumes" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.252322 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" 
path="/var/lib/kubelet/pods/09a277e9-f3d7-4499-b29b-ef8788c5e1b0/volumes" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.252911 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d04685f-adbf-45b2-a649-34f27aadc2b1" path="/var/lib/kubelet/pods/5d04685f-adbf-45b2-a649-34f27aadc2b1/volumes" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.254093 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" path="/var/lib/kubelet/pods/6463fe6f-cd6d-4078-8fa2-0d167de480df/volumes" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.254698 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" path="/var/lib/kubelet/pods/98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b/volumes" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.255336 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" path="/var/lib/kubelet/pods/bda951f8-8354-4ca3-be9e-f92f6fea40cc/volumes" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.256229 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.256416 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" path="/var/lib/kubelet/pods/ee2dda38-9821-41ec-b524-13a48badf5e9/volumes" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.310256 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knhz5\" (UniqueName: \"kubernetes.io/projected/7baf1369-6dba-465b-aa2f-f518ae175d80-kube-api-access-knhz5\") pod \"7baf1369-6dba-465b-aa2f-f518ae175d80\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.310316 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-combined-ca-bundle\") pod \"7baf1369-6dba-465b-aa2f-f518ae175d80\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.310351 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-config-data\") pod \"7baf1369-6dba-465b-aa2f-f518ae175d80\" (UID: \"7baf1369-6dba-465b-aa2f-f518ae175d80\") " Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.315001 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7baf1369-6dba-465b-aa2f-f518ae175d80-kube-api-access-knhz5" (OuterVolumeSpecName: "kube-api-access-knhz5") pod "7baf1369-6dba-465b-aa2f-f518ae175d80" (UID: "7baf1369-6dba-465b-aa2f-f518ae175d80"). InnerVolumeSpecName "kube-api-access-knhz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.335326 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-config-data" (OuterVolumeSpecName: "config-data") pod "7baf1369-6dba-465b-aa2f-f518ae175d80" (UID: "7baf1369-6dba-465b-aa2f-f518ae175d80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.340103 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7baf1369-6dba-465b-aa2f-f518ae175d80" (UID: "7baf1369-6dba-465b-aa2f-f518ae175d80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.412787 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knhz5\" (UniqueName: \"kubernetes.io/projected/7baf1369-6dba-465b-aa2f-f518ae175d80-kube-api-access-knhz5\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.413060 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.413071 4813 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7baf1369-6dba-465b-aa2f-f518ae175d80-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.438891 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j892q"] Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439409 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerName="placement-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439432 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerName="placement-api" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439445 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494a1163-f584-43fa-9224-825be2a90c27" containerName="glance-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439452 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="494a1163-f584-43fa-9224-825be2a90c27" containerName="glance-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439467 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="openstack-network-exporter" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439474 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="openstack-network-exporter" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439486 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-metadata" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439492 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-metadata" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439500 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494a1163-f584-43fa-9224-825be2a90c27" containerName="glance-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439506 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="494a1163-f584-43fa-9224-825be2a90c27" containerName="glance-log" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439520 4813 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" containerName="mysql-bootstrap" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439526 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" containerName="mysql-bootstrap" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439533 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" containerName="galera" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439538 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" containerName="galera" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439550 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="ceilometer-notification-agent" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439557 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="ceilometer-notification-agent" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439570 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439577 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-api" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439587 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="sg-core" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439593 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="sg-core" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439603 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="ceilometer-central-agent" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439609 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="ceilometer-central-agent" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439620 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439625 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439634 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerName="setup-container" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439640 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerName="setup-container" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439648 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerName="placement-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439655 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerName="placement-log" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 
17:05:58.439663 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6067c320-64f2-4c71-b4b0-bd136749200f" containerName="memcached" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439670 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6067c320-64f2-4c71-b4b0-bd136749200f" containerName="memcached" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439679 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7baf1369-6dba-465b-aa2f-f518ae175d80" containerName="nova-cell1-conductor-conductor" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439686 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="7baf1369-6dba-465b-aa2f-f518ae175d80" containerName="nova-cell1-conductor-conductor" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439695 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerName="rabbitmq" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439700 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerName="rabbitmq" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439707 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="cinder-scheduler" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439713 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="cinder-scheduler" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439721 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439726 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439736 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerName="setup-container" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439743 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerName="setup-container" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439754 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd38907-4473-487b-8f79-85baaca96f00" containerName="nova-scheduler-scheduler" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439759 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd38907-4473-487b-8f79-85baaca96f00" containerName="nova-scheduler-scheduler" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439768 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439774 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439782 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="probe" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439788 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="probe" Jan 29 17:05:58 crc 
kubenswrapper[4813]: E0129 17:05:58.439795 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6471b922-d5d2-4a43-ab3a-fbe69b43beb9" containerName="kube-state-metrics" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439802 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6471b922-d5d2-4a43-ab3a-fbe69b43beb9" containerName="kube-state-metrics" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439813 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d04685f-adbf-45b2-a649-34f27aadc2b1" containerName="keystone-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439819 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d04685f-adbf-45b2-a649-34f27aadc2b1" containerName="keystone-api" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439831 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439838 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439852 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerName="rabbitmq" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439859 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerName="rabbitmq" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439871 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="ovn-northd" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439877 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="ovn-northd" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439891 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439898 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439908 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439915 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439925 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="proxy-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439931 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="proxy-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439942 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296a369a-d653-4033-9a10-7077b62875f1" containerName="glance-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439948 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="296a369a-d653-4033-9a10-7077b62875f1" containerName="glance-log" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 
17:05:58.439956 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bbe82b-cffe-4d8b-ac7d-55507916528e" containerName="nova-cell0-conductor-conductor" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439962 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bbe82b-cffe-4d8b-ac7d-55507916528e" containerName="nova-cell0-conductor-conductor" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439971 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296a369a-d653-4033-9a10-7077b62875f1" containerName="glance-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.439978 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="296a369a-d653-4033-9a10-7077b62875f1" containerName="glance-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: E0129 17:05:58.439994 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440001 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440149 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="7baf1369-6dba-465b-aa2f-f518ae175d80" containerName="nova-cell1-conductor-conductor" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440159 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="proxy-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440170 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="sg-core" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440182 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440191 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="296a369a-d653-4033-9a10-7077b62875f1" containerName="glance-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440201 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440215 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd38907-4473-487b-8f79-85baaca96f00" containerName="nova-scheduler-scheduler" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440227 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6471b922-d5d2-4a43-ab3a-fbe69b43beb9" containerName="kube-state-metrics" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440235 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440246 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="cinder-scheduler" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440257 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bbe82b-cffe-4d8b-ac7d-55507916528e" containerName="nova-cell0-conductor-conductor" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440269 4813 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dd942f21-0785-443e-ab04-27548ecd9207" containerName="barbican-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440278 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b0c262-ff9b-4f8f-b18f-71e6b1f7b80b" containerName="ovn-controller" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440288 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="ovn-northd" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440297 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a277e9-f3d7-4499-b29b-ef8788c5e1b0" containerName="galera" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440304 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d04685f-adbf-45b2-a649-34f27aadc2b1" containerName="keystone-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440315 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6067c320-64f2-4c71-b4b0-bd136749200f" containerName="memcached" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440325 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd64c006-aedc-47b2-8704-3b1d05879f8c" containerName="cinder-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440335 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="296a369a-d653-4033-9a10-7077b62875f1" containerName="glance-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440340 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="494a1163-f584-43fa-9224-825be2a90c27" containerName="glance-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440350 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440355 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="d58049e2-10b0-4b6d-9a9e-d90420a1cecb" containerName="nova-api-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440364 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda951f8-8354-4ca3-be9e-f92f6fea40cc" containerName="rabbitmq" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440372 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="620cdd0f-d89f-4197-90e1-d1f17f4fd7f7" containerName="probe" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440382 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8892a0d0-88f2-4e0a-aafb-20e04d9e6289" containerName="nova-metadata-metadata" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440388 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerName="placement-api" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440394 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="494a1163-f584-43fa-9224-825be2a90c27" containerName="glance-httpd" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440401 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="ceilometer-notification-agent" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440408 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2dda38-9821-41ec-b524-13a48badf5e9" containerName="openstack-network-exporter" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 
17:05:58.440418 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a75d1-c71b-4af5-9fe8-83b53b60b86e" containerName="ceilometer-central-agent" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440425 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e504c0d2-5734-47b7-aa7f-4cdb2a339d41" containerName="placement-log" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.440433 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6463fe6f-cd6d-4078-8fa2-0d167de480df" containerName="rabbitmq" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.441569 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.451340 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j892q"] Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.514119 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xm2\" (UniqueName: \"kubernetes.io/projected/048662bc-eb7a-4a08-b4ec-52905bcf6f47-kube-api-access-z5xm2\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.514183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-utilities\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.514369 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-catalog-content\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.616138 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-catalog-content\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.616232 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xm2\" (UniqueName: \"kubernetes.io/projected/048662bc-eb7a-4a08-b4ec-52905bcf6f47-kube-api-access-z5xm2\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.616260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-utilities\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.616746 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-utilities\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.616943 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-catalog-content\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.635235 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xm2\" (UniqueName: \"kubernetes.io/projected/048662bc-eb7a-4a08-b4ec-52905bcf6f47-kube-api-access-z5xm2\") pod \"redhat-marketplace-j892q\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") " pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.773720 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j892q" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.974103 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7baf1369-6dba-465b-aa2f-f518ae175d80","Type":"ContainerDied","Data":"2920f276e0fe290bc673f2a5eaec6d057de31e08bb0f8eeba63d92add116587b"} Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.974178 4813 scope.go:117] "RemoveContainer" containerID="631fd5cdfae1ad3097ce3f137c586acd7e21b58be9ebbc7d10be5ab69aaa0593" Jan 29 17:05:58 crc kubenswrapper[4813]: I0129 17:05:58.974298 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 17:05:59 crc kubenswrapper[4813]: I0129 17:05:59.015705 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:05:59 crc kubenswrapper[4813]: I0129 17:05:59.027540 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 17:05:59 crc kubenswrapper[4813]: I0129 17:05:59.258362 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j892q"] Jan 29 17:05:59 crc kubenswrapper[4813]: I0129 17:05:59.999696 4813 generic.go:334] "Generic (PLEG): container finished" podID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerID="8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f" exitCode=0 Jan 29 17:06:00 crc kubenswrapper[4813]: I0129 17:05:59.999752 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j892q" event={"ID":"048662bc-eb7a-4a08-b4ec-52905bcf6f47","Type":"ContainerDied","Data":"8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f"} Jan 29 17:06:00 crc kubenswrapper[4813]: I0129 17:05:59.999855 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j892q" event={"ID":"048662bc-eb7a-4a08-b4ec-52905bcf6f47","Type":"ContainerStarted","Data":"8c18e8cad9c3b01900fba57e2be003bf45b177c2e3e3e6dbf782c09e00b4d78d"} Jan 29 17:06:00 crc kubenswrapper[4813]: I0129 17:06:00.241426 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:06:00 crc kubenswrapper[4813]: I0129 17:06:00.241758 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:06:00 crc kubenswrapper[4813]: I0129 17:06:00.252347 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7baf1369-6dba-465b-aa2f-f518ae175d80" path="/var/lib/kubelet/pods/7baf1369-6dba-465b-aa2f-f518ae175d80/volumes" Jan 29 17:06:00 crc kubenswrapper[4813]: I0129 17:06:00.253202 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 17:06:00 crc kubenswrapper[4813]: I0129 17:06:00.253599 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:06:00 crc kubenswrapper[4813]: I0129 17:06:00.253742 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" gracePeriod=600 Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.278218 4813 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.278689 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.282584 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.282713 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.282741 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.284865 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.287840 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.287930 4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" Jan 29 17:06:00 crc kubenswrapper[4813]: E0129 17:06:00.383218 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:06:01 crc kubenswrapper[4813]: I0129 17:06:01.011574 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j892q" event={"ID":"048662bc-eb7a-4a08-b4ec-52905bcf6f47","Type":"ContainerStarted","Data":"0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578"} Jan 29 17:06:01 crc kubenswrapper[4813]: I0129 17:06:01.013470 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" exitCode=0 Jan 29 17:06:01 crc kubenswrapper[4813]: I0129 17:06:01.013521 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"} Jan 29 17:06:01 crc kubenswrapper[4813]: I0129 17:06:01.013560 4813 scope.go:117] "RemoveContainer" containerID="c67bf6fa310210448a647da62a3e9c7bcafe61094dc8679ad60bf50059882bd5" Jan 29 17:06:01 crc kubenswrapper[4813]: I0129 17:06:01.014210 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:06:01 crc kubenswrapper[4813]: E0129 17:06:01.014503 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:06:02 crc kubenswrapper[4813]: I0129 17:06:02.029883 4813 generic.go:334] "Generic (PLEG): container finished" podID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerID="0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578" exitCode=0 Jan 29 17:06:02 crc kubenswrapper[4813]: I0129 17:06:02.029921 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j892q" event={"ID":"048662bc-eb7a-4a08-b4ec-52905bcf6f47","Type":"ContainerDied","Data":"0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578"} Jan 29 17:06:03 crc kubenswrapper[4813]: I0129 17:06:03.063104 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j892q" event={"ID":"048662bc-eb7a-4a08-b4ec-52905bcf6f47","Type":"ContainerStarted","Data":"5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442"} Jan 29 17:06:03 crc kubenswrapper[4813]: I0129 17:06:03.086606 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j892q" podStartSLOduration=2.554367764 podStartE2EDuration="5.086587492s" podCreationTimestamp="2026-01-29 17:05:58 +0000 UTC" firstStartedPulling="2026-01-29 17:06:00.001830035 +0000 UTC m=+2212.489033251" lastFinishedPulling="2026-01-29 17:06:02.534049763 +0000 UTC m=+2215.021252979" observedRunningTime="2026-01-29 17:06:03.081788996 +0000 UTC m=+2215.568992222" 
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.078949    4813 generic.go:334] "Generic (PLEG): container finished" podID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerID="0e1eabbbaa63c5764c44a582ebc5008e28c901ea880bd7fdcfd278bd736cbcb5" exitCode=0
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.079029    4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-886657b65-p552j" event={"ID":"8fa06e8d-be9e-4451-8387-d3ec49dd8306","Type":"ContainerDied","Data":"0e1eabbbaa63c5764c44a582ebc5008e28c901ea880bd7fdcfd278bd736cbcb5"}
Jan 29 17:06:05 crc kubenswrapper[4813]: E0129 17:06:05.275668    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:06:05 crc kubenswrapper[4813]: E0129 17:06:05.276128    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:06:05 crc kubenswrapper[4813]: E0129 17:06:05.277288    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:06:05 crc kubenswrapper[4813]: E0129 17:06:05.277351    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:06:05 crc kubenswrapper[4813]: E0129 17:06:05.277384    4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server"
Jan 29 17:06:05 crc kubenswrapper[4813]: E0129 17:06:05.278576    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:06:05 crc kubenswrapper[4813]: E0129 17:06:05.279869    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:06:05 crc kubenswrapper[4813]: E0129 17:06:05.279942    4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd"
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.740592    4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-886657b65-p552j"
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.833180    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fkzg\" (UniqueName: \"kubernetes.io/projected/8fa06e8d-be9e-4451-8387-d3ec49dd8306-kube-api-access-6fkzg\") pod \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") "
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.833523    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-internal-tls-certs\") pod \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") "
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.833573    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-combined-ca-bundle\") pod \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") "
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.833614    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-httpd-config\") pod \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") "
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.833639    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-ovndb-tls-certs\") pod \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") "
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.833715    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-public-tls-certs\") pod \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") "
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.833786    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-config\") pod \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\" (UID: \"8fa06e8d-be9e-4451-8387-d3ec49dd8306\") "
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.839315    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8fa06e8d-be9e-4451-8387-d3ec49dd8306" (UID: "8fa06e8d-be9e-4451-8387-d3ec49dd8306"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.839364    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa06e8d-be9e-4451-8387-d3ec49dd8306-kube-api-access-6fkzg" (OuterVolumeSpecName: "kube-api-access-6fkzg") pod "8fa06e8d-be9e-4451-8387-d3ec49dd8306" (UID: "8fa06e8d-be9e-4451-8387-d3ec49dd8306"). InnerVolumeSpecName "kube-api-access-6fkzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.874222    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-config" (OuterVolumeSpecName: "config") pod "8fa06e8d-be9e-4451-8387-d3ec49dd8306" (UID: "8fa06e8d-be9e-4451-8387-d3ec49dd8306"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.877287    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8fa06e8d-be9e-4451-8387-d3ec49dd8306" (UID: "8fa06e8d-be9e-4451-8387-d3ec49dd8306"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.892518    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8fa06e8d-be9e-4451-8387-d3ec49dd8306" (UID: "8fa06e8d-be9e-4451-8387-d3ec49dd8306"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.896928    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fa06e8d-be9e-4451-8387-d3ec49dd8306" (UID: "8fa06e8d-be9e-4451-8387-d3ec49dd8306"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.898297    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8fa06e8d-be9e-4451-8387-d3ec49dd8306" (UID: "8fa06e8d-be9e-4451-8387-d3ec49dd8306"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.935299    4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.935465    4813 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.935534    4813 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.935614    4813 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.935680    4813 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-config\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.935746    4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fkzg\" (UniqueName: \"kubernetes.io/projected/8fa06e8d-be9e-4451-8387-d3ec49dd8306-kube-api-access-6fkzg\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:05 crc kubenswrapper[4813]: I0129 17:06:05.935817    4813 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fa06e8d-be9e-4451-8387-d3ec49dd8306-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:06 crc kubenswrapper[4813]: I0129 17:06:06.090208    4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-886657b65-p552j" event={"ID":"8fa06e8d-be9e-4451-8387-d3ec49dd8306","Type":"ContainerDied","Data":"65c57bc46dfce2ad6cad7709d3a9484b4422901b0fa3eb57e5fd4ac8b3114c65"}
Jan 29 17:06:06 crc kubenswrapper[4813]: I0129 17:06:06.090278    4813 scope.go:117] "RemoveContainer" containerID="5aa28b5649ace6006edcf7ca887bd1062f191ab08030423dc4e8ee65bc52dfff"
Jan 29 17:06:06 crc kubenswrapper[4813]: I0129 17:06:06.090295    4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-886657b65-p552j"
Jan 29 17:06:06 crc kubenswrapper[4813]: I0129 17:06:06.112088    4813 scope.go:117] "RemoveContainer" containerID="0e1eabbbaa63c5764c44a582ebc5008e28c901ea880bd7fdcfd278bd736cbcb5"
Jan 29 17:06:06 crc kubenswrapper[4813]: I0129 17:06:06.132545    4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-886657b65-p552j"]
Jan 29 17:06:06 crc kubenswrapper[4813]: I0129 17:06:06.139492    4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-886657b65-p552j"]
Jan 29 17:06:06 crc kubenswrapper[4813]: I0129 17:06:06.265259    4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" path="/var/lib/kubelet/pods/8fa06e8d-be9e-4451-8387-d3ec49dd8306/volumes"
Jan 29 17:06:08 crc kubenswrapper[4813]: I0129 17:06:08.774540    4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j892q"
Jan 29 17:06:08 crc kubenswrapper[4813]: I0129 17:06:08.774589    4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j892q"
Jan 29 17:06:08 crc kubenswrapper[4813]: I0129 17:06:08.819455    4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j892q"
Jan 29 17:06:09 crc kubenswrapper[4813]: I0129 17:06:09.149574    4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j892q"
Jan 29 17:06:09 crc kubenswrapper[4813]: I0129 17:06:09.190082    4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j892q"]
Jan 29 17:06:10 crc kubenswrapper[4813]: E0129 17:06:10.276250    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:06:10 crc kubenswrapper[4813]: E0129 17:06:10.277020    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:06:10 crc kubenswrapper[4813]: E0129 17:06:10.277361    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 29 17:06:10 crc kubenswrapper[4813]: E0129 17:06:10.277399    4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server"
Jan 29 17:06:10 crc kubenswrapper[4813]: E0129 17:06:10.280033    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:06:10 crc kubenswrapper[4813]: E0129 17:06:10.282865    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:06:10 crc kubenswrapper[4813]: E0129 17:06:10.284574    4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 29 17:06:10 crc kubenswrapper[4813]: E0129 17:06:10.284619    4813 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd"
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.132603    4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j892q" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerName="registry-server" containerID="cri-o://5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442" gracePeriod=2
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.604183    4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j892q"
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.717726    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-utilities\") pod \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") "
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.717790    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-catalog-content\") pod \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") "
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.717891    4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5xm2\" (UniqueName: \"kubernetes.io/projected/048662bc-eb7a-4a08-b4ec-52905bcf6f47-kube-api-access-z5xm2\") pod \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\" (UID: \"048662bc-eb7a-4a08-b4ec-52905bcf6f47\") "
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.719381    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-utilities" (OuterVolumeSpecName: "utilities") pod "048662bc-eb7a-4a08-b4ec-52905bcf6f47" (UID: "048662bc-eb7a-4a08-b4ec-52905bcf6f47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.724449    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/048662bc-eb7a-4a08-b4ec-52905bcf6f47-kube-api-access-z5xm2" (OuterVolumeSpecName: "kube-api-access-z5xm2") pod "048662bc-eb7a-4a08-b4ec-52905bcf6f47" (UID: "048662bc-eb7a-4a08-b4ec-52905bcf6f47"). InnerVolumeSpecName "kube-api-access-z5xm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.745560    4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "048662bc-eb7a-4a08-b4ec-52905bcf6f47" (UID: "048662bc-eb7a-4a08-b4ec-52905bcf6f47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.819653    4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.819727    4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/048662bc-eb7a-4a08-b4ec-52905bcf6f47-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:11 crc kubenswrapper[4813]: I0129 17:06:11.819746    4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5xm2\" (UniqueName: \"kubernetes.io/projected/048662bc-eb7a-4a08-b4ec-52905bcf6f47-kube-api-access-z5xm2\") on node \"crc\" DevicePath \"\""
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.147706    4813 generic.go:334] "Generic (PLEG): container finished" podID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerID="5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442" exitCode=0
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.147749    4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j892q" event={"ID":"048662bc-eb7a-4a08-b4ec-52905bcf6f47","Type":"ContainerDied","Data":"5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442"}
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.147752    4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j892q"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.147777    4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j892q" event={"ID":"048662bc-eb7a-4a08-b4ec-52905bcf6f47","Type":"ContainerDied","Data":"8c18e8cad9c3b01900fba57e2be003bf45b177c2e3e3e6dbf782c09e00b4d78d"}
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.147797    4813 scope.go:117] "RemoveContainer" containerID="5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.178753    4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j892q"]
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.183621    4813 scope.go:117] "RemoveContainer" containerID="0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.184625    4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j892q"]
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.199276    4813 scope.go:117] "RemoveContainer" containerID="8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.234182    4813 scope.go:117] "RemoveContainer" containerID="5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442"
Jan 29 17:06:12 crc kubenswrapper[4813]: E0129 17:06:12.234537    4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442\": container with ID starting with 5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442 not found: ID does not exist" containerID="5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.234572    4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442"} err="failed to get container status \"5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442\": rpc error: code = NotFound desc = could not find container \"5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442\": container with ID starting with 5f32f9872eaff4166ce16aa424559b959d9cc2402a631199ad9bf41855a52442 not found: ID does not exist"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.234605    4813 scope.go:117] "RemoveContainer" containerID="0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578"
Jan 29 17:06:12 crc kubenswrapper[4813]: E0129 17:06:12.234912    4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578\": container with ID starting with 0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578 not found: ID does not exist" containerID="0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.234946    4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578"} err="failed to get container status \"0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578\": rpc error: code = NotFound desc = could not find container \"0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578\": container with ID starting with 0eb05935b4cba74213fed1efd1919823bf29aff90d434e73aaab65ee13916578 not found: ID does not exist"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.234965    4813 scope.go:117] "RemoveContainer" containerID="8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f"
Jan 29 17:06:12 crc kubenswrapper[4813]: E0129 17:06:12.235234    4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f\": container with ID starting with 8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f not found: ID does not exist" containerID="8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.235258    4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f"} err="failed to get container status \"8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f\": rpc error: code = NotFound desc = could not find container \"8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f\": container with ID starting with 8877e5a4e2cd90d8fc06161ab985d5b1b049a90573132590ae8abb447115676f not found: ID does not exist"
Jan 29 17:06:12 crc kubenswrapper[4813]: I0129 17:06:12.248311    4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" path="/var/lib/kubelet/pods/048662bc-eb7a-4a08-b4ec-52905bcf6f47/volumes"
Jan 29 17:06:13 crc kubenswrapper[4813]: I0129 17:06:13.239691    4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"
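Each "Cleaned up orphaned pod volumes dir" entry above corresponds to the kubelet removing /var/lib/kubelet/pods/<pod-UID>/volumes once everything beneath it has been unmounted. A read-only sketch of inspecting that layout; the path structure and UID come from the log, and the program simply reports whether anything is left:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path layout taken from the kubelet_volumes.go entries above;
	// run on a node, typically as root.
	podUID := "048662bc-eb7a-4a08-b4ec-52905bcf6f47"
	dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	entries, err := os.ReadDir(dir)
	if err != nil {
		// After cleanup the directory is gone, so this is the expected outcome.
		fmt.Println("no volume dir left:", err)
		return
	}
	for _, e := range entries {
		fmt.Println("still present:", e.Name()) // leftovers keep the dir "orphaned"
	}
}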
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:06:15 crc kubenswrapper[4813]: E0129 17:06:15.276018 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:15 crc kubenswrapper[4813]: E0129 17:06:15.276817 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:15 crc kubenswrapper[4813]: E0129 17:06:15.277293 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:15 crc kubenswrapper[4813]: E0129 17:06:15.277335 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" Jan 29 17:06:15 crc kubenswrapper[4813]: E0129 17:06:15.278466 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:15 crc kubenswrapper[4813]: E0129 17:06:15.280592 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:15 crc kubenswrapper[4813]: E0129 17:06:15.281691 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:15 crc kubenswrapper[4813]: E0129 17:06:15.281731 4813 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.213795 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xqdpz_6aa0a9e1-c775-4e0b-8286-ef272885c653/ovs-vswitchd/0.log" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.215185 4813 generic.go:334] "Generic (PLEG): container finished" podID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" exitCode=137 Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.215264 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xqdpz" event={"ID":"6aa0a9e1-c775-4e0b-8286-ef272885c653","Type":"ContainerDied","Data":"2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f"} Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.222024 4813 generic.go:334] "Generic (PLEG): container finished" podID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerID="3e0e0ef163807a59b2a6a361cb8f9f50111fa4482f203c2cb3078cc29529bdee" exitCode=137 Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.222071 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"3e0e0ef163807a59b2a6a361cb8f9f50111fa4482f203c2cb3078cc29529bdee"} Jan 29 17:06:20 crc kubenswrapper[4813]: E0129 17:06:20.278646 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f is running failed: container process not found" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:20 crc kubenswrapper[4813]: E0129 17:06:20.279053 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f is running failed: container process not found" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:20 crc kubenswrapper[4813]: E0129 17:06:20.279346 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f is running failed: container process not found" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 17:06:20 crc kubenswrapper[4813]: E0129 17:06:20.279373 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" Jan 29 17:06:20 crc kubenswrapper[4813]: E0129 17:06:20.281436 4813 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:20 crc kubenswrapper[4813]: E0129 17:06:20.281719 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:20 crc kubenswrapper[4813]: E0129 17:06:20.282018 4813 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 17:06:20 crc kubenswrapper[4813]: E0129 17:06:20.282058 4813 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-xqdpz" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.551643 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xqdpz_6aa0a9e1-c775-4e0b-8286-ef272885c653/ovs-vswitchd/0.log" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.552487 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.642550 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-run\") pod \"6aa0a9e1-c775-4e0b-8286-ef272885c653\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.642637 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aa0a9e1-c775-4e0b-8286-ef272885c653-scripts\") pod \"6aa0a9e1-c775-4e0b-8286-ef272885c653\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.642693 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l92bf\" (UniqueName: \"kubernetes.io/projected/6aa0a9e1-c775-4e0b-8286-ef272885c653-kube-api-access-l92bf\") pod \"6aa0a9e1-c775-4e0b-8286-ef272885c653\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.642722 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-etc-ovs\") pod \"6aa0a9e1-c775-4e0b-8286-ef272885c653\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.642780 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-lib\") pod \"6aa0a9e1-c775-4e0b-8286-ef272885c653\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.642816 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-log\") pod \"6aa0a9e1-c775-4e0b-8286-ef272885c653\" (UID: \"6aa0a9e1-c775-4e0b-8286-ef272885c653\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.643199 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-log" (OuterVolumeSpecName: "var-log") pod "6aa0a9e1-c775-4e0b-8286-ef272885c653" (UID: "6aa0a9e1-c775-4e0b-8286-ef272885c653"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.643237 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-run" (OuterVolumeSpecName: "var-run") pod "6aa0a9e1-c775-4e0b-8286-ef272885c653" (UID: "6aa0a9e1-c775-4e0b-8286-ef272885c653"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.644661 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa0a9e1-c775-4e0b-8286-ef272885c653-scripts" (OuterVolumeSpecName: "scripts") pod "6aa0a9e1-c775-4e0b-8286-ef272885c653" (UID: "6aa0a9e1-c775-4e0b-8286-ef272885c653"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.645353 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-lib" (OuterVolumeSpecName: "var-lib") pod "6aa0a9e1-c775-4e0b-8286-ef272885c653" (UID: "6aa0a9e1-c775-4e0b-8286-ef272885c653"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.645319 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "6aa0a9e1-c775-4e0b-8286-ef272885c653" (UID: "6aa0a9e1-c775-4e0b-8286-ef272885c653"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.656232 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa0a9e1-c775-4e0b-8286-ef272885c653-kube-api-access-l92bf" (OuterVolumeSpecName: "kube-api-access-l92bf") pod "6aa0a9e1-c775-4e0b-8286-ef272885c653" (UID: "6aa0a9e1-c775-4e0b-8286-ef272885c653"). InnerVolumeSpecName "kube-api-access-l92bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.744522 4813 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6aa0a9e1-c775-4e0b-8286-ef272885c653-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.744567 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l92bf\" (UniqueName: \"kubernetes.io/projected/6aa0a9e1-c775-4e0b-8286-ef272885c653-kube-api-access-l92bf\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.744581 4813 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.744592 4813 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-lib\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.744602 4813 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.744611 4813 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6aa0a9e1-c775-4e0b-8286-ef272885c653-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.771655 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.846066 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"f2aa8580-8c90-4607-a906-c039e1e4c111\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.846167 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-cache\") pod \"f2aa8580-8c90-4607-a906-c039e1e4c111\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.846214 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") pod \"f2aa8580-8c90-4607-a906-c039e1e4c111\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.846237 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2aa8580-8c90-4607-a906-c039e1e4c111-combined-ca-bundle\") pod \"f2aa8580-8c90-4607-a906-c039e1e4c111\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.846261 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-lock\") pod \"f2aa8580-8c90-4607-a906-c039e1e4c111\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.846331 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68n2w\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-kube-api-access-68n2w\") pod \"f2aa8580-8c90-4607-a906-c039e1e4c111\" (UID: \"f2aa8580-8c90-4607-a906-c039e1e4c111\") " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.846930 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-lock" (OuterVolumeSpecName: "lock") pod "f2aa8580-8c90-4607-a906-c039e1e4c111" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.847032 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-cache" (OuterVolumeSpecName: "cache") pod "f2aa8580-8c90-4607-a906-c039e1e4c111" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.847182 4813 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-cache\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.847196 4813 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2aa8580-8c90-4607-a906-c039e1e4c111-lock\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.850170 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "f2aa8580-8c90-4607-a906-c039e1e4c111" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.850525 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-kube-api-access-68n2w" (OuterVolumeSpecName: "kube-api-access-68n2w") pod "f2aa8580-8c90-4607-a906-c039e1e4c111" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111"). InnerVolumeSpecName "kube-api-access-68n2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.850597 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f2aa8580-8c90-4607-a906-c039e1e4c111" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.948243 4813 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.948313 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68n2w\" (UniqueName: \"kubernetes.io/projected/f2aa8580-8c90-4607-a906-c039e1e4c111-kube-api-access-68n2w\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.948350 4813 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 29 17:06:20 crc kubenswrapper[4813]: I0129 17:06:20.963857 4813 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.049778 4813 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.058132 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2aa8580-8c90-4607-a906-c039e1e4c111-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2aa8580-8c90-4607-a906-c039e1e4c111" (UID: "f2aa8580-8c90-4607-a906-c039e1e4c111"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.151773 4813 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2aa8580-8c90-4607-a906-c039e1e4c111-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.231101 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xqdpz_6aa0a9e1-c775-4e0b-8286-ef272885c653/ovs-vswitchd/0.log" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.233196 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xqdpz" event={"ID":"6aa0a9e1-c775-4e0b-8286-ef272885c653","Type":"ContainerDied","Data":"a6333d3ed43e3aa222e6d7bd7f69cc9ca548e928b7bce9769a46f86a8b0701a5"} Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.233245 4813 scope.go:117] "RemoveContainer" containerID="2575d8621f60d8da7e3695cbbd2c3f194dec037de0736cf9e9c9764f1b4a825f" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.233644 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xqdpz" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.242484 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f2aa8580-8c90-4607-a906-c039e1e4c111","Type":"ContainerDied","Data":"b692aee5352815bccee628d6bc7b15a8275f3ba4405ca90ce7e6202a6c7691b6"} Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.242592 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.270243 4813 scope.go:117] "RemoveContainer" containerID="77ed1d084e306ba42c9e093ddf160adf007fe00870b03ed5a224b45e12225ab4" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.285584 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.297217 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.300184 4813 scope.go:117] "RemoveContainer" containerID="fd5b50e1141f53e3f4146715f3fa19672e6c3c63fb01fb8ae6321eeeb23d77a9" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.301125 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-xqdpz"] Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.305581 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-xqdpz"] Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.318969 4813 scope.go:117] "RemoveContainer" containerID="3e0e0ef163807a59b2a6a361cb8f9f50111fa4482f203c2cb3078cc29529bdee" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.357850 4813 scope.go:117] "RemoveContainer" containerID="0cf8a700e82655607fac9d5a91b1fe7324cf5f6a7f959b303b9cc0b73826bb23" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.403980 4813 scope.go:117] "RemoveContainer" containerID="0912400cbaf3d64dac1bbb048041811be489912643b2b55d8cdd00b6904d5618" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.439427 4813 scope.go:117] "RemoveContainer" containerID="974ef80b14499b0e09dc5f020a2ac326c98830e1859b4884bb3296cb6d847ee5" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.462019 4813 scope.go:117] "RemoveContainer" 
containerID="2cc8dee52a522946f284add3268d6d09db41b3c578932c60a72ffe29a472f072" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.478257 4813 scope.go:117] "RemoveContainer" containerID="85396d22d1bacb74de0dc15e3d6568fd0dd75a24110e76865a22e9698f5e03d7" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.513406 4813 scope.go:117] "RemoveContainer" containerID="18d667fd3f825a0180d5ee212d62dc003f4d18f1f616c39664709f8fd3e22de4" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.568045 4813 scope.go:117] "RemoveContainer" containerID="57a4c1b42a22a49c5352631c53b0536257db9090a489fe6d5ac1fae934fe7b10" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.588572 4813 scope.go:117] "RemoveContainer" containerID="8300f3dbc427b40576635dc7f78e11d14f9f9f09cfe4720072bd30ea18c40a96" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.611032 4813 scope.go:117] "RemoveContainer" containerID="e9e8a5f597dfe66901c775ed9f99ffb45cc88a323355d8c8e377c086ac43eb70" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.627554 4813 scope.go:117] "RemoveContainer" containerID="6b315628dc01734b17c3bfcb292a1a9b22d94c5ec378c68879f38fea37cae951" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.647696 4813 scope.go:117] "RemoveContainer" containerID="34a647053be6be5646ecc99401b474db714d929ce717508cb1f765ce77b6559c" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.672300 4813 scope.go:117] "RemoveContainer" containerID="632060180af065e41041ad9ffff2395b8df04b6c43e12683432988737705e770" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.691596 4813 scope.go:117] "RemoveContainer" containerID="c9a12f754af1b256a78355b7b3a3e4f909d6a20d352cb12126f67304e678fc2f" Jan 29 17:06:21 crc kubenswrapper[4813]: I0129 17:06:21.738082 4813 scope.go:117] "RemoveContainer" containerID="357fa21995192338eea4cf618d4dd037f9d9c53370af078711f773adf08dcd27" Jan 29 17:06:22 crc kubenswrapper[4813]: I0129 17:06:22.249784 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" path="/var/lib/kubelet/pods/6aa0a9e1-c775-4e0b-8286-ef272885c653/volumes" Jan 29 17:06:22 crc kubenswrapper[4813]: I0129 17:06:22.250465 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" path="/var/lib/kubelet/pods/f2aa8580-8c90-4607-a906-c039e1e4c111/volumes" Jan 29 17:06:25 crc kubenswrapper[4813]: I0129 17:06:25.239810 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:06:25 crc kubenswrapper[4813]: E0129 17:06:25.240530 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:06:26 crc kubenswrapper[4813]: I0129 17:06:26.784952 4813 scope.go:117] "RemoveContainer" containerID="57f41b96c4e3c37aa561663bfe83546f5132f6cf34321eb2021fd30cbc286d2c" Jan 29 17:06:26 crc kubenswrapper[4813]: I0129 17:06:26.812088 4813 scope.go:117] "RemoveContainer" containerID="f0e8e259d772767b69077be1ae02c8f19b4de52eb9c8b7a421a2f46277dd388d" Jan 29 17:06:26 crc kubenswrapper[4813]: I0129 17:06:26.853691 4813 scope.go:117] "RemoveContainer" 
containerID="6d4a5389f11fece56842c71ac840626c40db5ade9fd41e6524977fd93d84f59b" Jan 29 17:06:26 crc kubenswrapper[4813]: I0129 17:06:26.891739 4813 scope.go:117] "RemoveContainer" containerID="9180461d3ca1b31da8c2b5ffd77c0afcc94c78f3c213b2a804f76cf6152efa84" Jan 29 17:06:26 crc kubenswrapper[4813]: I0129 17:06:26.912041 4813 scope.go:117] "RemoveContainer" containerID="8f7c8d0a6c71b5e838911574aca3ddbbb7a206178ccb747bdc1bf3c3d3aae782" Jan 29 17:06:26 crc kubenswrapper[4813]: I0129 17:06:26.948106 4813 scope.go:117] "RemoveContainer" containerID="d1421a6327e7e180010e12d8787ac24c190372a9071611b0e1315d40e3b4fb96" Jan 29 17:06:26 crc kubenswrapper[4813]: I0129 17:06:26.965152 4813 scope.go:117] "RemoveContainer" containerID="405b32419ba7c2468a5c44f5a8ea0559199efb36440983f8204fe5b78e3d66c2" Jan 29 17:06:26 crc kubenswrapper[4813]: I0129 17:06:26.996701 4813 scope.go:117] "RemoveContainer" containerID="a661203ef1d1c5865964e620ae1da6ba70aeb836bd04db252fdf5da8043001c2" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.030792 4813 scope.go:117] "RemoveContainer" containerID="2a119d6cd3391339c60a7d61a292d8fdc6f031b8d75829ab322a43fe2c01d8cf" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.071286 4813 scope.go:117] "RemoveContainer" containerID="96ce0e98214ecc3dca853a25bbf06658a39336d51fa186c97aff7d03e5d42077" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.093305 4813 scope.go:117] "RemoveContainer" containerID="5f48223a92a9444cb85942e403ab8901d1e45d0547889895defb0452c6c9b52a" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.114238 4813 scope.go:117] "RemoveContainer" containerID="8b8b67b8a39e84e66dc8143533661febf68e6872996e2632319803eba49bed81" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.132163 4813 scope.go:117] "RemoveContainer" containerID="b67fdc4d54a98668ffd876dfecd0de8419e745636df96be1c039e05ff0e43319" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.151484 4813 scope.go:117] "RemoveContainer" containerID="f6b4e2de6fc7c170ae3a44784a2550fe9843c7ca3c2cd3058e85cb5f72c2a9e2" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.171865 4813 scope.go:117] "RemoveContainer" containerID="7a62641b8e0956ea67aaca50700652a0aaf86c4bf3d1382bbda5d5d373dbf256" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.199656 4813 scope.go:117] "RemoveContainer" containerID="1d6227c2caf355a9f0fbf758a68ba38abb2b953c39c33eb99281e58b465e18ff" Jan 29 17:06:27 crc kubenswrapper[4813]: I0129 17:06:27.237925 4813 scope.go:117] "RemoveContainer" containerID="b2561ed4f6f62f8ecb439c1fcdc29a4178bfc26d6202bdfabc617cd3c474d99c" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.245056 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.246022 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784101 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xtkdm"] Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784719 4813 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784736 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784751 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784757 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerName="extract-content" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784765 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerName="extract-content" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784775 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerName="registry-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784780 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerName="registry-server" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784788 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784793 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784804 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerName="extract-utilities" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784810 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerName="extract-utilities" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784821 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784827 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784837 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784844 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-server" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784857 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784864 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784878 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerName="neutron-httpd" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784886 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerName="neutron-httpd" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784899 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-reaper" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784907 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-reaper" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784916 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-expirer" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784925 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-expirer" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784936 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784944 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784953 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="rsync" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784963 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="rsync" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784974 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.784982 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-server" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.784996 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="swift-recon-cron" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785004 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="swift-recon-cron" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.785022 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-updater" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785031 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-updater" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.785043 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-updater" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785050 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-updater" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.785059 4813 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server-init" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785068 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server-init" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.785077 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785083 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.785095 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerName="neutron-api" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785103 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerName="neutron-api" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.785137 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785145 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-server" Jan 29 17:06:38 crc kubenswrapper[4813]: E0129 17:06:38.785160 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785169 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785336 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-expirer" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785350 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785363 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="rsync" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785376 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-updater" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785391 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785406 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-updater" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785417 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785426 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="swift-recon-cron" Jan 29 17:06:38 crc kubenswrapper[4813]: 
I0129 17:06:38.785440 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovsdb-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785452 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785467 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerName="neutron-httpd" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785475 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa06e8d-be9e-4451-8387-d3ec49dd8306" containerName="neutron-api" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785485 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-auditor" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785496 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-reaper" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785505 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="container-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785514 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="048662bc-eb7a-4a08-b4ec-52905bcf6f47" containerName="registry-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785525 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa0a9e1-c775-4e0b-8286-ef272885c653" containerName="ovs-vswitchd" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785540 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785549 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="account-server" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.785557 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2aa8580-8c90-4607-a906-c039e1e4c111" containerName="object-replicator" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.786650 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.799813 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtkdm"] Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.906919 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-catalog-content\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.907183 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-utilities\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:38 crc kubenswrapper[4813]: I0129 17:06:38.907484 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vhw\" (UniqueName: \"kubernetes.io/projected/a52b936c-4e52-4f93-ab27-d231df94f6cd-kube-api-access-96vhw\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:39 crc kubenswrapper[4813]: I0129 17:06:39.009634 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-utilities\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:39 crc kubenswrapper[4813]: I0129 17:06:39.009726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vhw\" (UniqueName: \"kubernetes.io/projected/a52b936c-4e52-4f93-ab27-d231df94f6cd-kube-api-access-96vhw\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:39 crc kubenswrapper[4813]: I0129 17:06:39.009780 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-catalog-content\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:39 crc kubenswrapper[4813]: I0129 17:06:39.010314 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-utilities\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:39 crc kubenswrapper[4813]: I0129 17:06:39.010440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-catalog-content\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:39 crc kubenswrapper[4813]: I0129 17:06:39.034429 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-96vhw\" (UniqueName: \"kubernetes.io/projected/a52b936c-4e52-4f93-ab27-d231df94f6cd-kube-api-access-96vhw\") pod \"redhat-operators-xtkdm\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") " pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:39 crc kubenswrapper[4813]: I0129 17:06:39.110171 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:06:39 crc kubenswrapper[4813]: I0129 17:06:39.566883 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtkdm"] Jan 29 17:06:40 crc kubenswrapper[4813]: I0129 17:06:40.421590 4813 generic.go:334] "Generic (PLEG): container finished" podID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerID="c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57" exitCode=0 Jan 29 17:06:40 crc kubenswrapper[4813]: I0129 17:06:40.421652 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkdm" event={"ID":"a52b936c-4e52-4f93-ab27-d231df94f6cd","Type":"ContainerDied","Data":"c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57"} Jan 29 17:06:40 crc kubenswrapper[4813]: I0129 17:06:40.421920 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkdm" event={"ID":"a52b936c-4e52-4f93-ab27-d231df94f6cd","Type":"ContainerStarted","Data":"d7840c5e7c1a7f16877faf07f6f316dc6ba7471451ee9920cae0aa16fcc7ef2f"} Jan 29 17:06:41 crc kubenswrapper[4813]: I0129 17:06:41.430200 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkdm" event={"ID":"a52b936c-4e52-4f93-ab27-d231df94f6cd","Type":"ContainerStarted","Data":"24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2"} Jan 29 17:06:50 crc kubenswrapper[4813]: I0129 17:06:50.891589 4813 generic.go:334] "Generic (PLEG): container finished" podID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerID="24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2" exitCode=0 Jan 29 17:06:50 crc kubenswrapper[4813]: I0129 17:06:50.891706 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkdm" event={"ID":"a52b936c-4e52-4f93-ab27-d231df94f6cd","Type":"ContainerDied","Data":"24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2"} Jan 29 17:06:52 crc kubenswrapper[4813]: I0129 17:06:52.908676 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkdm" event={"ID":"a52b936c-4e52-4f93-ab27-d231df94f6cd","Type":"ContainerStarted","Data":"9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791"} Jan 29 17:06:52 crc kubenswrapper[4813]: I0129 17:06:52.928618 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtkdm" podStartSLOduration=3.254658175 podStartE2EDuration="14.928591178s" podCreationTimestamp="2026-01-29 17:06:38 +0000 UTC" firstStartedPulling="2026-01-29 17:06:40.423321015 +0000 UTC m=+2252.910524231" lastFinishedPulling="2026-01-29 17:06:52.097254018 +0000 UTC m=+2264.584457234" observedRunningTime="2026-01-29 17:06:52.925321945 +0000 UTC m=+2265.412525171" watchObservedRunningTime="2026-01-29 17:06:52.928591178 +0000 UTC m=+2265.415794394" Jan 29 17:06:53 crc kubenswrapper[4813]: I0129 17:06:53.239486 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 
Jan 29 17:06:53 crc kubenswrapper[4813]: E0129 17:06:53.239751 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:06:59 crc kubenswrapper[4813]: I0129 17:06:59.111010 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xtkdm"
Jan 29 17:06:59 crc kubenswrapper[4813]: I0129 17:06:59.111410 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtkdm"
Jan 29 17:06:59 crc kubenswrapper[4813]: I0129 17:06:59.161707 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtkdm"
Jan 29 17:07:00 crc kubenswrapper[4813]: I0129 17:07:00.022200 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtkdm"
Jan 29 17:07:00 crc kubenswrapper[4813]: I0129 17:07:00.082430 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtkdm"]
Jan 29 17:07:01 crc kubenswrapper[4813]: I0129 17:07:01.985914 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xtkdm" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerName="registry-server" containerID="cri-o://9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791" gracePeriod=2
Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.410147 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtkdm"
Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.485399 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-utilities\") pod \"a52b936c-4e52-4f93-ab27-d231df94f6cd\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") "
Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.485920 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-catalog-content\") pod \"a52b936c-4e52-4f93-ab27-d231df94f6cd\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") "
Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.486045 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96vhw\" (UniqueName: \"kubernetes.io/projected/a52b936c-4e52-4f93-ab27-d231df94f6cd-kube-api-access-96vhw\") pod \"a52b936c-4e52-4f93-ab27-d231df94f6cd\" (UID: \"a52b936c-4e52-4f93-ab27-d231df94f6cd\") "
Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.486497 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-utilities" (OuterVolumeSpecName: "utilities") pod "a52b936c-4e52-4f93-ab27-d231df94f6cd" (UID: "a52b936c-4e52-4f93-ab27-d231df94f6cd"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.499363 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52b936c-4e52-4f93-ab27-d231df94f6cd-kube-api-access-96vhw" (OuterVolumeSpecName: "kube-api-access-96vhw") pod "a52b936c-4e52-4f93-ab27-d231df94f6cd" (UID: "a52b936c-4e52-4f93-ab27-d231df94f6cd"). InnerVolumeSpecName "kube-api-access-96vhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.588670 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.589208 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96vhw\" (UniqueName: \"kubernetes.io/projected/a52b936c-4e52-4f93-ab27-d231df94f6cd-kube-api-access-96vhw\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.618517 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a52b936c-4e52-4f93-ab27-d231df94f6cd" (UID: "a52b936c-4e52-4f93-ab27-d231df94f6cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.691037 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a52b936c-4e52-4f93-ab27-d231df94f6cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.997389 4813 generic.go:334] "Generic (PLEG): container finished" podID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerID="9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791" exitCode=0 Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.997432 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkdm" event={"ID":"a52b936c-4e52-4f93-ab27-d231df94f6cd","Type":"ContainerDied","Data":"9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791"} Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.997458 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtkdm" event={"ID":"a52b936c-4e52-4f93-ab27-d231df94f6cd","Type":"ContainerDied","Data":"d7840c5e7c1a7f16877faf07f6f316dc6ba7471451ee9920cae0aa16fcc7ef2f"} Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.997459 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xtkdm" Jan 29 17:07:02 crc kubenswrapper[4813]: I0129 17:07:02.997520 4813 scope.go:117] "RemoveContainer" containerID="9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791" Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.019758 4813 scope.go:117] "RemoveContainer" containerID="24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2" Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.037813 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtkdm"] Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.061131 4813 scope.go:117] "RemoveContainer" containerID="c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57" Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.070503 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xtkdm"] Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.084976 4813 scope.go:117] "RemoveContainer" containerID="9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791" Jan 29 17:07:03 crc kubenswrapper[4813]: E0129 17:07:03.085650 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791\": container with ID starting with 9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791 not found: ID does not exist" containerID="9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791" Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.085683 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791"} err="failed to get container status \"9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791\": rpc error: code = NotFound desc = could not find container \"9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791\": container with ID starting with 9c55c79d99fb9e4ad1b7bb7d66bbe40664b94146eaee7b776e78fecef6611791 not found: ID does not exist" Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.085705 4813 scope.go:117] "RemoveContainer" containerID="24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2" Jan 29 17:07:03 crc kubenswrapper[4813]: E0129 17:07:03.086190 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2\": container with ID starting with 24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2 not found: ID does not exist" containerID="24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2" Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.086232 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2"} err="failed to get container status \"24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2\": rpc error: code = NotFound desc = could not find container \"24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2\": container with ID starting with 24c6b2eb31527a065b754639955122f384c50db165702e2c1d6a0597c7b508a2 not found: ID does not exist" Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.086251 4813 scope.go:117] "RemoveContainer" 
containerID="c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57" Jan 29 17:07:03 crc kubenswrapper[4813]: E0129 17:07:03.086487 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57\": container with ID starting with c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57 not found: ID does not exist" containerID="c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57" Jan 29 17:07:03 crc kubenswrapper[4813]: I0129 17:07:03.086512 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57"} err="failed to get container status \"c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57\": rpc error: code = NotFound desc = could not find container \"c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57\": container with ID starting with c94b819822cc03090ae866fe1c0d07a5450f34f4a8cda31080f10b94a9322a57 not found: ID does not exist" Jan 29 17:07:04 crc kubenswrapper[4813]: I0129 17:07:04.248274 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" path="/var/lib/kubelet/pods/a52b936c-4e52-4f93-ab27-d231df94f6cd/volumes" Jan 29 17:07:08 crc kubenswrapper[4813]: I0129 17:07:08.255155 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:07:08 crc kubenswrapper[4813]: E0129 17:07:08.255755 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:07:22 crc kubenswrapper[4813]: I0129 17:07:22.239547 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:07:22 crc kubenswrapper[4813]: E0129 17:07:22.240350 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:07:28 crc kubenswrapper[4813]: I0129 17:07:28.024825 4813 scope.go:117] "RemoveContainer" containerID="4d43b35f24d6bb7052db0c2918b21d113c70aee5d59fd072a7ca4d61d18aaa55" Jan 29 17:07:28 crc kubenswrapper[4813]: I0129 17:07:28.061464 4813 scope.go:117] "RemoveContainer" containerID="d38688200fdc81a1702950a866899b4b4670339d7a38d48f3b9c0380938d77ec" Jan 29 17:07:37 crc kubenswrapper[4813]: I0129 17:07:37.239773 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:07:37 crc kubenswrapper[4813]: E0129 17:07:37.240509 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 29 17:07:37 crc kubenswrapper[4813]: E0129 17:07:37.240509 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:07:48 crc kubenswrapper[4813]: I0129 17:07:48.243995 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"
Jan 29 17:07:48 crc kubenswrapper[4813]: E0129 17:07:48.244698 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:08:02 crc kubenswrapper[4813]: I0129 17:08:02.240432 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"
Jan 29 17:08:02 crc kubenswrapper[4813]: E0129 17:08:02.241167 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:08:16 crc kubenswrapper[4813]: I0129 17:08:16.240101 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"
Jan 29 17:08:16 crc kubenswrapper[4813]: E0129 17:08:16.241378 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:08:30 crc kubenswrapper[4813]: I0129 17:08:30.239743 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"
Jan 29 17:08:30 crc kubenswrapper[4813]: E0129 17:08:30.240566 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:08:43 crc kubenswrapper[4813]: I0129 17:08:43.239988 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727"
Jan 29 17:08:43 crc kubenswrapper[4813]: E0129 17:08:43.240778 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r"
podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:08:54 crc kubenswrapper[4813]: I0129 17:08:54.240198 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:08:54 crc kubenswrapper[4813]: E0129 17:08:54.241039 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:09:07 crc kubenswrapper[4813]: I0129 17:09:07.240472 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:09:07 crc kubenswrapper[4813]: E0129 17:09:07.241173 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:09:10 crc kubenswrapper[4813]: I0129 17:09:10.919867 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z5q8d"] Jan 29 17:09:10 crc kubenswrapper[4813]: E0129 17:09:10.920526 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerName="extract-content" Jan 29 17:09:10 crc kubenswrapper[4813]: I0129 17:09:10.920537 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerName="extract-content" Jan 29 17:09:10 crc kubenswrapper[4813]: E0129 17:09:10.920559 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerName="extract-utilities" Jan 29 17:09:10 crc kubenswrapper[4813]: I0129 17:09:10.920565 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerName="extract-utilities" Jan 29 17:09:10 crc kubenswrapper[4813]: E0129 17:09:10.920576 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerName="registry-server" Jan 29 17:09:10 crc kubenswrapper[4813]: I0129 17:09:10.920582 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerName="registry-server" Jan 29 17:09:10 crc kubenswrapper[4813]: I0129 17:09:10.920711 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52b936c-4e52-4f93-ab27-d231df94f6cd" containerName="registry-server" Jan 29 17:09:10 crc kubenswrapper[4813]: I0129 17:09:10.921597 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:10 crc kubenswrapper[4813]: I0129 17:09:10.934622 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5q8d"] Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.025301 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswhk\" (UniqueName: \"kubernetes.io/projected/a263814f-f1ed-4741-85d5-9cb225bcc8cd-kube-api-access-fswhk\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.025505 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-catalog-content\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.025597 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-utilities\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.126989 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-catalog-content\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.127064 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-utilities\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.127771 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-catalog-content\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.127829 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-utilities\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.128062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswhk\" (UniqueName: \"kubernetes.io/projected/a263814f-f1ed-4741-85d5-9cb225bcc8cd-kube-api-access-fswhk\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.166630 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fswhk\" (UniqueName: \"kubernetes.io/projected/a263814f-f1ed-4741-85d5-9cb225bcc8cd-kube-api-access-fswhk\") pod \"certified-operators-z5q8d\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.247166 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:11 crc kubenswrapper[4813]: I0129 17:09:11.694815 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z5q8d"] Jan 29 17:09:12 crc kubenswrapper[4813]: I0129 17:09:12.288474 4813 generic.go:334] "Generic (PLEG): container finished" podID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerID="a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543" exitCode=0 Jan 29 17:09:12 crc kubenswrapper[4813]: I0129 17:09:12.288520 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5q8d" event={"ID":"a263814f-f1ed-4741-85d5-9cb225bcc8cd","Type":"ContainerDied","Data":"a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543"} Jan 29 17:09:12 crc kubenswrapper[4813]: I0129 17:09:12.288547 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5q8d" event={"ID":"a263814f-f1ed-4741-85d5-9cb225bcc8cd","Type":"ContainerStarted","Data":"d5ffa05d4a2197d81f247c0d7db6d072ef3a34108f3fc014c8fd4f05229c9c0e"} Jan 29 17:09:13 crc kubenswrapper[4813]: I0129 17:09:13.297224 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5q8d" event={"ID":"a263814f-f1ed-4741-85d5-9cb225bcc8cd","Type":"ContainerStarted","Data":"2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e"} Jan 29 17:09:14 crc kubenswrapper[4813]: I0129 17:09:14.308576 4813 generic.go:334] "Generic (PLEG): container finished" podID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerID="2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e" exitCode=0 Jan 29 17:09:14 crc kubenswrapper[4813]: I0129 17:09:14.308674 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5q8d" event={"ID":"a263814f-f1ed-4741-85d5-9cb225bcc8cd","Type":"ContainerDied","Data":"2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e"} Jan 29 17:09:15 crc kubenswrapper[4813]: I0129 17:09:15.316699 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5q8d" event={"ID":"a263814f-f1ed-4741-85d5-9cb225bcc8cd","Type":"ContainerStarted","Data":"09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd"} Jan 29 17:09:15 crc kubenswrapper[4813]: I0129 17:09:15.339365 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z5q8d" podStartSLOduration=2.612783361 podStartE2EDuration="5.339342792s" podCreationTimestamp="2026-01-29 17:09:10 +0000 UTC" firstStartedPulling="2026-01-29 17:09:12.292409999 +0000 UTC m=+2404.779613225" lastFinishedPulling="2026-01-29 17:09:15.01896944 +0000 UTC m=+2407.506172656" observedRunningTime="2026-01-29 17:09:15.333465337 +0000 UTC m=+2407.820668553" watchObservedRunningTime="2026-01-29 17:09:15.339342792 +0000 UTC m=+2407.826546008" Jan 29 17:09:19 crc kubenswrapper[4813]: I0129 17:09:19.240309 4813 scope.go:117] "RemoveContainer" 
containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:09:19 crc kubenswrapper[4813]: E0129 17:09:19.241046 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:09:21 crc kubenswrapper[4813]: I0129 17:09:21.247328 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:21 crc kubenswrapper[4813]: I0129 17:09:21.247397 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:21 crc kubenswrapper[4813]: I0129 17:09:21.298434 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:21 crc kubenswrapper[4813]: I0129 17:09:21.392029 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:21 crc kubenswrapper[4813]: I0129 17:09:21.525011 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5q8d"] Jan 29 17:09:23 crc kubenswrapper[4813]: I0129 17:09:23.367093 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z5q8d" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerName="registry-server" containerID="cri-o://09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd" gracePeriod=2 Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.245236 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.376592 4813 generic.go:334] "Generic (PLEG): container finished" podID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerID="09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd" exitCode=0 Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.376630 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5q8d" event={"ID":"a263814f-f1ed-4741-85d5-9cb225bcc8cd","Type":"ContainerDied","Data":"09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd"} Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.376676 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z5q8d" event={"ID":"a263814f-f1ed-4741-85d5-9cb225bcc8cd","Type":"ContainerDied","Data":"d5ffa05d4a2197d81f247c0d7db6d072ef3a34108f3fc014c8fd4f05229c9c0e"} Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.376693 4813 scope.go:117] "RemoveContainer" containerID="09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.376926 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z5q8d" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.392905 4813 scope.go:117] "RemoveContainer" containerID="2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.410210 4813 scope.go:117] "RemoveContainer" containerID="a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.422135 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-utilities\") pod \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.422232 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-catalog-content\") pod \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.422271 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fswhk\" (UniqueName: \"kubernetes.io/projected/a263814f-f1ed-4741-85d5-9cb225bcc8cd-kube-api-access-fswhk\") pod \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\" (UID: \"a263814f-f1ed-4741-85d5-9cb225bcc8cd\") " Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.423062 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-utilities" (OuterVolumeSpecName: "utilities") pod "a263814f-f1ed-4741-85d5-9cb225bcc8cd" (UID: "a263814f-f1ed-4741-85d5-9cb225bcc8cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.428260 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a263814f-f1ed-4741-85d5-9cb225bcc8cd-kube-api-access-fswhk" (OuterVolumeSpecName: "kube-api-access-fswhk") pod "a263814f-f1ed-4741-85d5-9cb225bcc8cd" (UID: "a263814f-f1ed-4741-85d5-9cb225bcc8cd"). InnerVolumeSpecName "kube-api-access-fswhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.435242 4813 scope.go:117] "RemoveContainer" containerID="09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd" Jan 29 17:09:24 crc kubenswrapper[4813]: E0129 17:09:24.435707 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd\": container with ID starting with 09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd not found: ID does not exist" containerID="09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.435740 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd"} err="failed to get container status \"09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd\": rpc error: code = NotFound desc = could not find container \"09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd\": container with ID starting with 09b1237922792bbb69985778019f4ee5730affbd251c7b6a60c26c3bd8edd3dd not found: ID does not exist" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.435764 4813 scope.go:117] "RemoveContainer" containerID="2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e" Jan 29 17:09:24 crc kubenswrapper[4813]: E0129 17:09:24.436096 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e\": container with ID starting with 2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e not found: ID does not exist" containerID="2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.436137 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e"} err="failed to get container status \"2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e\": rpc error: code = NotFound desc = could not find container \"2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e\": container with ID starting with 2de8d974f56a75e5e75e86fab6b854a972468a35695ad56a2e71378542408c0e not found: ID does not exist" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.436150 4813 scope.go:117] "RemoveContainer" containerID="a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543" Jan 29 17:09:24 crc kubenswrapper[4813]: E0129 17:09:24.436441 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543\": container with ID starting with a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543 not found: ID does not exist" containerID="a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.436485 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543"} err="failed to get container status \"a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543\": rpc error: code = NotFound desc = could not 
find container \"a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543\": container with ID starting with a30ae9ce8008c5bc6138e7ad0a687a59ca7c417e6badec7ccb67e2dca4707543 not found: ID does not exist" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.472601 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a263814f-f1ed-4741-85d5-9cb225bcc8cd" (UID: "a263814f-f1ed-4741-85d5-9cb225bcc8cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.524266 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.524305 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fswhk\" (UniqueName: \"kubernetes.io/projected/a263814f-f1ed-4741-85d5-9cb225bcc8cd-kube-api-access-fswhk\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.524315 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a263814f-f1ed-4741-85d5-9cb225bcc8cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.715429 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z5q8d"] Jan 29 17:09:24 crc kubenswrapper[4813]: I0129 17:09:24.726477 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z5q8d"] Jan 29 17:09:26 crc kubenswrapper[4813]: I0129 17:09:26.254028 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" path="/var/lib/kubelet/pods/a263814f-f1ed-4741-85d5-9cb225bcc8cd/volumes" Jan 29 17:09:32 crc kubenswrapper[4813]: I0129 17:09:32.240595 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:09:32 crc kubenswrapper[4813]: E0129 17:09:32.241568 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:09:46 crc kubenswrapper[4813]: I0129 17:09:46.240586 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:09:46 crc kubenswrapper[4813]: E0129 17:09:46.241673 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:09:59 crc kubenswrapper[4813]: I0129 17:09:59.240178 4813 scope.go:117] "RemoveContainer" 
containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:09:59 crc kubenswrapper[4813]: E0129 17:09:59.241536 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:10:11 crc kubenswrapper[4813]: I0129 17:10:11.240170 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:10:11 crc kubenswrapper[4813]: E0129 17:10:11.241068 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:10:23 crc kubenswrapper[4813]: I0129 17:10:23.239828 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:10:23 crc kubenswrapper[4813]: E0129 17:10:23.240596 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:10:37 crc kubenswrapper[4813]: I0129 17:10:37.239467 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:10:37 crc kubenswrapper[4813]: E0129 17:10:37.240286 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:10:50 crc kubenswrapper[4813]: I0129 17:10:50.240983 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:10:50 crc kubenswrapper[4813]: E0129 17:10:50.241857 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:11:05 crc kubenswrapper[4813]: I0129 17:11:05.239929 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:11:06 crc kubenswrapper[4813]: I0129 17:11:06.141403 4813 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"6d0aa7efce0bc9b386a4ec5e02992d44c6a4525992811def1b1518b0d270a965"} Jan 29 17:13:30 crc kubenswrapper[4813]: I0129 17:13:30.240686 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:13:30 crc kubenswrapper[4813]: I0129 17:13:30.241250 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:14:00 crc kubenswrapper[4813]: I0129 17:14:00.240550 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:14:00 crc kubenswrapper[4813]: I0129 17:14:00.241218 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 17:14:30.240264 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 17:14:30.241032 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 17:14:30.249715 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 17:14:30.250290 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d0aa7efce0bc9b386a4ec5e02992d44c6a4525992811def1b1518b0d270a965"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 17:14:30.250349 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://6d0aa7efce0bc9b386a4ec5e02992d44c6a4525992811def1b1518b0d270a965" gracePeriod=600 Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 
17:14:30.534258 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="6d0aa7efce0bc9b386a4ec5e02992d44c6a4525992811def1b1518b0d270a965" exitCode=0 Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 17:14:30.534578 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"6d0aa7efce0bc9b386a4ec5e02992d44c6a4525992811def1b1518b0d270a965"} Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 17:14:30.534611 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095"} Jan 29 17:14:30 crc kubenswrapper[4813]: I0129 17:14:30.534628 4813 scope.go:117] "RemoveContainer" containerID="bdea96fea29b09778820e965f7737c742a5bbc1c18fe8d0173518a33654e3727" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.146854 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k"] Jan 29 17:15:00 crc kubenswrapper[4813]: E0129 17:15:00.148575 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerName="extract-utilities" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.148612 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerName="extract-utilities" Jan 29 17:15:00 crc kubenswrapper[4813]: E0129 17:15:00.148627 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerName="registry-server" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.148635 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerName="registry-server" Jan 29 17:15:00 crc kubenswrapper[4813]: E0129 17:15:00.148664 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerName="extract-content" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.148672 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerName="extract-content" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.148819 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="a263814f-f1ed-4741-85d5-9cb225bcc8cd" containerName="registry-server" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.149451 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.153256 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.153675 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.157486 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k"] Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.312649 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/209f55e2-15cf-4815-852b-39fc42ef65b4-config-volume\") pod \"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.312725 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24t6\" (UniqueName: \"kubernetes.io/projected/209f55e2-15cf-4815-852b-39fc42ef65b4-kube-api-access-v24t6\") pod \"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.312839 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/209f55e2-15cf-4815-852b-39fc42ef65b4-secret-volume\") pod \"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.414187 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/209f55e2-15cf-4815-852b-39fc42ef65b4-config-volume\") pod \"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.414260 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24t6\" (UniqueName: \"kubernetes.io/projected/209f55e2-15cf-4815-852b-39fc42ef65b4-kube-api-access-v24t6\") pod \"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.414287 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/209f55e2-15cf-4815-852b-39fc42ef65b4-secret-volume\") pod \"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.415458 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/209f55e2-15cf-4815-852b-39fc42ef65b4-config-volume\") pod 
\"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.431217 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/209f55e2-15cf-4815-852b-39fc42ef65b4-secret-volume\") pod \"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.434170 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24t6\" (UniqueName: \"kubernetes.io/projected/209f55e2-15cf-4815-852b-39fc42ef65b4-kube-api-access-v24t6\") pod \"collect-profiles-29495115-nwx4k\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.474215 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:00 crc kubenswrapper[4813]: I0129 17:15:00.934156 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k"] Jan 29 17:15:01 crc kubenswrapper[4813]: I0129 17:15:01.732334 4813 generic.go:334] "Generic (PLEG): container finished" podID="209f55e2-15cf-4815-852b-39fc42ef65b4" containerID="32ec312aacbc3b359bce406a14fa4404107f94f0985322605bcaedab7a731447" exitCode=0 Jan 29 17:15:01 crc kubenswrapper[4813]: I0129 17:15:01.732592 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" event={"ID":"209f55e2-15cf-4815-852b-39fc42ef65b4","Type":"ContainerDied","Data":"32ec312aacbc3b359bce406a14fa4404107f94f0985322605bcaedab7a731447"} Jan 29 17:15:01 crc kubenswrapper[4813]: I0129 17:15:01.732615 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" event={"ID":"209f55e2-15cf-4815-852b-39fc42ef65b4","Type":"ContainerStarted","Data":"22dd07695aaff28c6a8859297e0306d07cc3b225e2b08a4726cd45e17cbec09c"} Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.026638 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.155928 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/209f55e2-15cf-4815-852b-39fc42ef65b4-secret-volume\") pod \"209f55e2-15cf-4815-852b-39fc42ef65b4\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.156015 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/209f55e2-15cf-4815-852b-39fc42ef65b4-config-volume\") pod \"209f55e2-15cf-4815-852b-39fc42ef65b4\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.156162 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v24t6\" (UniqueName: \"kubernetes.io/projected/209f55e2-15cf-4815-852b-39fc42ef65b4-kube-api-access-v24t6\") pod \"209f55e2-15cf-4815-852b-39fc42ef65b4\" (UID: \"209f55e2-15cf-4815-852b-39fc42ef65b4\") " Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.157157 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209f55e2-15cf-4815-852b-39fc42ef65b4-config-volume" (OuterVolumeSpecName: "config-volume") pod "209f55e2-15cf-4815-852b-39fc42ef65b4" (UID: "209f55e2-15cf-4815-852b-39fc42ef65b4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.163942 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/209f55e2-15cf-4815-852b-39fc42ef65b4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "209f55e2-15cf-4815-852b-39fc42ef65b4" (UID: "209f55e2-15cf-4815-852b-39fc42ef65b4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.164004 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209f55e2-15cf-4815-852b-39fc42ef65b4-kube-api-access-v24t6" (OuterVolumeSpecName: "kube-api-access-v24t6") pod "209f55e2-15cf-4815-852b-39fc42ef65b4" (UID: "209f55e2-15cf-4815-852b-39fc42ef65b4"). InnerVolumeSpecName "kube-api-access-v24t6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.257759 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/209f55e2-15cf-4815-852b-39fc42ef65b4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.257805 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/209f55e2-15cf-4815-852b-39fc42ef65b4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.257818 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v24t6\" (UniqueName: \"kubernetes.io/projected/209f55e2-15cf-4815-852b-39fc42ef65b4-kube-api-access-v24t6\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.746192 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" event={"ID":"209f55e2-15cf-4815-852b-39fc42ef65b4","Type":"ContainerDied","Data":"22dd07695aaff28c6a8859297e0306d07cc3b225e2b08a4726cd45e17cbec09c"} Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.746239 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495115-nwx4k" Jan 29 17:15:03 crc kubenswrapper[4813]: I0129 17:15:03.746256 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22dd07695aaff28c6a8859297e0306d07cc3b225e2b08a4726cd45e17cbec09c" Jan 29 17:15:04 crc kubenswrapper[4813]: I0129 17:15:04.103229 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw"] Jan 29 17:15:04 crc kubenswrapper[4813]: I0129 17:15:04.108437 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-pkrnw"] Jan 29 17:15:04 crc kubenswrapper[4813]: I0129 17:15:04.249131 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37afb82-f85f-47d3-ad0c-5c7c60b74083" path="/var/lib/kubelet/pods/b37afb82-f85f-47d3-ad0c-5c7c60b74083/volumes" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.393868 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qxkks"] Jan 29 17:15:14 crc kubenswrapper[4813]: E0129 17:15:14.394621 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="209f55e2-15cf-4815-852b-39fc42ef65b4" containerName="collect-profiles" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.394633 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="209f55e2-15cf-4815-852b-39fc42ef65b4" containerName="collect-profiles" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.394756 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="209f55e2-15cf-4815-852b-39fc42ef65b4" containerName="collect-profiles" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.395690 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.410297 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxkks"] Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.534393 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-catalog-content\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.534436 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-utilities\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.534494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqwxf\" (UniqueName: \"kubernetes.io/projected/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-kube-api-access-zqwxf\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.641511 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-catalog-content\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.641615 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-utilities\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.641712 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqwxf\" (UniqueName: \"kubernetes.io/projected/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-kube-api-access-zqwxf\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.642666 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-catalog-content\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.642974 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-utilities\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.672533 4813 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zqwxf\" (UniqueName: \"kubernetes.io/projected/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-kube-api-access-zqwxf\") pod \"community-operators-qxkks\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:14 crc kubenswrapper[4813]: I0129 17:15:14.721265 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:15 crc kubenswrapper[4813]: I0129 17:15:15.075442 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxkks"] Jan 29 17:15:15 crc kubenswrapper[4813]: I0129 17:15:15.826355 4813 generic.go:334] "Generic (PLEG): container finished" podID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerID="8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352" exitCode=0 Jan 29 17:15:15 crc kubenswrapper[4813]: I0129 17:15:15.826400 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkks" event={"ID":"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b","Type":"ContainerDied","Data":"8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352"} Jan 29 17:15:15 crc kubenswrapper[4813]: I0129 17:15:15.826426 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkks" event={"ID":"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b","Type":"ContainerStarted","Data":"49c4424947540b81235fb3bc825f88fb88f982a689610b95495465413701b566"} Jan 29 17:15:15 crc kubenswrapper[4813]: I0129 17:15:15.828913 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:15:17 crc kubenswrapper[4813]: I0129 17:15:17.839683 4813 generic.go:334] "Generic (PLEG): container finished" podID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerID="577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f" exitCode=0 Jan 29 17:15:17 crc kubenswrapper[4813]: I0129 17:15:17.839766 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkks" event={"ID":"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b","Type":"ContainerDied","Data":"577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f"} Jan 29 17:15:18 crc kubenswrapper[4813]: I0129 17:15:18.849382 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkks" event={"ID":"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b","Type":"ContainerStarted","Data":"ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d"} Jan 29 17:15:18 crc kubenswrapper[4813]: I0129 17:15:18.868502 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qxkks" podStartSLOduration=2.48487936 podStartE2EDuration="4.868487327s" podCreationTimestamp="2026-01-29 17:15:14 +0000 UTC" firstStartedPulling="2026-01-29 17:15:15.828712236 +0000 UTC m=+2768.315915452" lastFinishedPulling="2026-01-29 17:15:18.212320203 +0000 UTC m=+2770.699523419" observedRunningTime="2026-01-29 17:15:18.865834942 +0000 UTC m=+2771.353038158" watchObservedRunningTime="2026-01-29 17:15:18.868487327 +0000 UTC m=+2771.355690543" Jan 29 17:15:24 crc kubenswrapper[4813]: I0129 17:15:24.721457 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:24 crc kubenswrapper[4813]: I0129 17:15:24.722106 4813 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:24 crc kubenswrapper[4813]: I0129 17:15:24.771049 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:24 crc kubenswrapper[4813]: I0129 17:15:24.940884 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:25 crc kubenswrapper[4813]: I0129 17:15:25.002051 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxkks"] Jan 29 17:15:26 crc kubenswrapper[4813]: I0129 17:15:26.910376 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qxkks" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerName="registry-server" containerID="cri-o://ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d" gracePeriod=2 Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.294605 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.318643 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-utilities\") pod \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.318949 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-catalog-content\") pod \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.319055 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqwxf\" (UniqueName: \"kubernetes.io/projected/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-kube-api-access-zqwxf\") pod \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\" (UID: \"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b\") " Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.321185 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-utilities" (OuterVolumeSpecName: "utilities") pod "70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" (UID: "70a8e8a3-7327-4b9e-bb8c-f800d1b2831b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.324813 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-kube-api-access-zqwxf" (OuterVolumeSpecName: "kube-api-access-zqwxf") pod "70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" (UID: "70a8e8a3-7327-4b9e-bb8c-f800d1b2831b"). InnerVolumeSpecName "kube-api-access-zqwxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.382096 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" (UID: "70a8e8a3-7327-4b9e-bb8c-f800d1b2831b"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.420000 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.420041 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.420057 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqwxf\" (UniqueName: \"kubernetes.io/projected/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b-kube-api-access-zqwxf\") on node \"crc\" DevicePath \"\"" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.919084 4813 generic.go:334] "Generic (PLEG): container finished" podID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerID="ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d" exitCode=0 Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.919150 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkks" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.920154 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkks" event={"ID":"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b","Type":"ContainerDied","Data":"ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d"} Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.920250 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkks" event={"ID":"70a8e8a3-7327-4b9e-bb8c-f800d1b2831b","Type":"ContainerDied","Data":"49c4424947540b81235fb3bc825f88fb88f982a689610b95495465413701b566"} Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.920293 4813 scope.go:117] "RemoveContainer" containerID="ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.942222 4813 scope.go:117] "RemoveContainer" containerID="577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.954188 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxkks"] Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.960397 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qxkks"] Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.976003 4813 scope.go:117] "RemoveContainer" containerID="8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.993696 4813 scope.go:117] "RemoveContainer" containerID="ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d" Jan 29 17:15:27 crc kubenswrapper[4813]: E0129 17:15:27.994170 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d\": container with ID starting with ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d not found: ID does not exist" containerID="ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d" Jan 29 17:15:27 crc 
kubenswrapper[4813]: I0129 17:15:27.994218 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d"} err="failed to get container status \"ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d\": rpc error: code = NotFound desc = could not find container \"ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d\": container with ID starting with ffa85e46f8b754e503551080c94ff071088e3a4083ad13f5c7e2d24ac5a4673d not found: ID does not exist" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.994243 4813 scope.go:117] "RemoveContainer" containerID="577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f" Jan 29 17:15:27 crc kubenswrapper[4813]: E0129 17:15:27.994668 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f\": container with ID starting with 577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f not found: ID does not exist" containerID="577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.994805 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f"} err="failed to get container status \"577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f\": rpc error: code = NotFound desc = could not find container \"577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f\": container with ID starting with 577c3ede0efdcdb904ca3d1f9393007052ef2cdfa6ca26c2390206a5bed6736f not found: ID does not exist" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.994906 4813 scope.go:117] "RemoveContainer" containerID="8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352" Jan 29 17:15:27 crc kubenswrapper[4813]: E0129 17:15:27.995388 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352\": container with ID starting with 8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352 not found: ID does not exist" containerID="8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352" Jan 29 17:15:27 crc kubenswrapper[4813]: I0129 17:15:27.995449 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352"} err="failed to get container status \"8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352\": rpc error: code = NotFound desc = could not find container \"8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352\": container with ID starting with 8d67bb470e4cc6c62699a2e617cd923eb1e487c892388f37175e21928aaae352 not found: ID does not exist" Jan 29 17:15:28 crc kubenswrapper[4813]: I0129 17:15:28.238727 4813 scope.go:117] "RemoveContainer" containerID="9387a32d56cda5ecea6a5147513e78dd169c89b5f381b3e987b327edc3d7f7ce" Jan 29 17:15:28 crc kubenswrapper[4813]: I0129 17:15:28.248984 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" path="/var/lib/kubelet/pods/70a8e8a3-7327-4b9e-bb8c-f800d1b2831b/volumes" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.665493 4813 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-66wd6"] Jan 29 17:16:02 crc kubenswrapper[4813]: E0129 17:16:02.668081 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerName="extract-utilities" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.668397 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerName="extract-utilities" Jan 29 17:16:02 crc kubenswrapper[4813]: E0129 17:16:02.668518 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerName="extract-content" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.668641 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerName="extract-content" Jan 29 17:16:02 crc kubenswrapper[4813]: E0129 17:16:02.668763 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerName="registry-server" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.668885 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerName="registry-server" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.669297 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a8e8a3-7327-4b9e-bb8c-f800d1b2831b" containerName="registry-server" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.671786 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.683343 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66wd6"] Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.717083 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-catalog-content\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.717179 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czxxc\" (UniqueName: \"kubernetes.io/projected/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-kube-api-access-czxxc\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.717222 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-utilities\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.818213 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czxxc\" (UniqueName: \"kubernetes.io/projected/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-kube-api-access-czxxc\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: 
I0129 17:16:02.818278 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-utilities\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.818331 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-catalog-content\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.818915 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-catalog-content\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.819448 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-utilities\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.837409 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czxxc\" (UniqueName: \"kubernetes.io/projected/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-kube-api-access-czxxc\") pod \"redhat-marketplace-66wd6\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:02 crc kubenswrapper[4813]: I0129 17:16:02.992724 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:03 crc kubenswrapper[4813]: I0129 17:16:03.432664 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-66wd6"] Jan 29 17:16:04 crc kubenswrapper[4813]: I0129 17:16:04.136568 4813 generic.go:334] "Generic (PLEG): container finished" podID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerID="a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31" exitCode=0 Jan 29 17:16:04 crc kubenswrapper[4813]: I0129 17:16:04.136612 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66wd6" event={"ID":"e00a0bc7-b4f1-4542-b9dd-63313b498ed4","Type":"ContainerDied","Data":"a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31"} Jan 29 17:16:04 crc kubenswrapper[4813]: I0129 17:16:04.136636 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66wd6" event={"ID":"e00a0bc7-b4f1-4542-b9dd-63313b498ed4","Type":"ContainerStarted","Data":"9142e5f2746d5fa8b30592b4e90a530cb8a1f6ef881158fd18d1a6e86706f51a"} Jan 29 17:16:07 crc kubenswrapper[4813]: I0129 17:16:07.157396 4813 generic.go:334] "Generic (PLEG): container finished" podID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerID="30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1" exitCode=0 Jan 29 17:16:07 crc kubenswrapper[4813]: I0129 17:16:07.157446 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66wd6" event={"ID":"e00a0bc7-b4f1-4542-b9dd-63313b498ed4","Type":"ContainerDied","Data":"30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1"} Jan 29 17:16:08 crc kubenswrapper[4813]: I0129 17:16:08.166691 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66wd6" event={"ID":"e00a0bc7-b4f1-4542-b9dd-63313b498ed4","Type":"ContainerStarted","Data":"2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b"} Jan 29 17:16:08 crc kubenswrapper[4813]: I0129 17:16:08.185933 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-66wd6" podStartSLOduration=2.570460323 podStartE2EDuration="6.185915227s" podCreationTimestamp="2026-01-29 17:16:02 +0000 UTC" firstStartedPulling="2026-01-29 17:16:04.139323046 +0000 UTC m=+2816.626526262" lastFinishedPulling="2026-01-29 17:16:07.75477794 +0000 UTC m=+2820.241981166" observedRunningTime="2026-01-29 17:16:08.183382126 +0000 UTC m=+2820.670585352" watchObservedRunningTime="2026-01-29 17:16:08.185915227 +0000 UTC m=+2820.673118443" Jan 29 17:16:12 crc kubenswrapper[4813]: I0129 17:16:12.993216 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:12 crc kubenswrapper[4813]: I0129 17:16:12.993546 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:13 crc kubenswrapper[4813]: I0129 17:16:13.042092 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:13 crc kubenswrapper[4813]: I0129 17:16:13.239330 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:13 crc kubenswrapper[4813]: I0129 17:16:13.286721 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-66wd6"] Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.211265 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-66wd6" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="registry-server" containerID="cri-o://2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b" gracePeriod=2 Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.643498 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.693508 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-utilities\") pod \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.694594 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-utilities" (OuterVolumeSpecName: "utilities") pod "e00a0bc7-b4f1-4542-b9dd-63313b498ed4" (UID: "e00a0bc7-b4f1-4542-b9dd-63313b498ed4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.794583 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czxxc\" (UniqueName: \"kubernetes.io/projected/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-kube-api-access-czxxc\") pod \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.795009 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-catalog-content\") pod \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\" (UID: \"e00a0bc7-b4f1-4542-b9dd-63313b498ed4\") " Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.795286 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.803375 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-kube-api-access-czxxc" (OuterVolumeSpecName: "kube-api-access-czxxc") pod "e00a0bc7-b4f1-4542-b9dd-63313b498ed4" (UID: "e00a0bc7-b4f1-4542-b9dd-63313b498ed4"). InnerVolumeSpecName "kube-api-access-czxxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.820874 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e00a0bc7-b4f1-4542-b9dd-63313b498ed4" (UID: "e00a0bc7-b4f1-4542-b9dd-63313b498ed4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.896666 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czxxc\" (UniqueName: \"kubernetes.io/projected/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-kube-api-access-czxxc\") on node \"crc\" DevicePath \"\"" Jan 29 17:16:15 crc kubenswrapper[4813]: I0129 17:16:15.896704 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e00a0bc7-b4f1-4542-b9dd-63313b498ed4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.220181 4813 generic.go:334] "Generic (PLEG): container finished" podID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerID="2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b" exitCode=0 Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.220234 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66wd6" event={"ID":"e00a0bc7-b4f1-4542-b9dd-63313b498ed4","Type":"ContainerDied","Data":"2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b"} Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.220255 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-66wd6" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.220271 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-66wd6" event={"ID":"e00a0bc7-b4f1-4542-b9dd-63313b498ed4","Type":"ContainerDied","Data":"9142e5f2746d5fa8b30592b4e90a530cb8a1f6ef881158fd18d1a6e86706f51a"} Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.220289 4813 scope.go:117] "RemoveContainer" containerID="2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.243287 4813 scope.go:117] "RemoveContainer" containerID="30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.258249 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-66wd6"] Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.263204 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-66wd6"] Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.280717 4813 scope.go:117] "RemoveContainer" containerID="a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.295577 4813 scope.go:117] "RemoveContainer" containerID="2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b" Jan 29 17:16:16 crc kubenswrapper[4813]: E0129 17:16:16.296062 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b\": container with ID starting with 2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b not found: ID does not exist" containerID="2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.296236 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b"} err="failed to get container status 
\"2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b\": rpc error: code = NotFound desc = could not find container \"2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b\": container with ID starting with 2aebd3eccd03e4406386d1732865af2e2a2b960a680886eef376afb16e943a8b not found: ID does not exist" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.296271 4813 scope.go:117] "RemoveContainer" containerID="30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1" Jan 29 17:16:16 crc kubenswrapper[4813]: E0129 17:16:16.296663 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1\": container with ID starting with 30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1 not found: ID does not exist" containerID="30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.296744 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1"} err="failed to get container status \"30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1\": rpc error: code = NotFound desc = could not find container \"30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1\": container with ID starting with 30d96fe3b5761a999e9a18652ffe121f5e47744aa5ce8ae8a55fbf8e1b9ce3a1 not found: ID does not exist" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.296801 4813 scope.go:117] "RemoveContainer" containerID="a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31" Jan 29 17:16:16 crc kubenswrapper[4813]: E0129 17:16:16.297106 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31\": container with ID starting with a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31 not found: ID does not exist" containerID="a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31" Jan 29 17:16:16 crc kubenswrapper[4813]: I0129 17:16:16.297216 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31"} err="failed to get container status \"a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31\": rpc error: code = NotFound desc = could not find container \"a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31\": container with ID starting with a5a4cb6548b0f5f6fb1a343e628b8f525c4a2f03cba7fd0ff6d9c0b27d199f31 not found: ID does not exist" Jan 29 17:16:18 crc kubenswrapper[4813]: I0129 17:16:18.247701 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" path="/var/lib/kubelet/pods/e00a0bc7-b4f1-4542-b9dd-63313b498ed4/volumes" Jan 29 17:16:30 crc kubenswrapper[4813]: I0129 17:16:30.239763 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:16:30 crc kubenswrapper[4813]: I0129 17:16:30.240362 4813 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:17:00 crc kubenswrapper[4813]: I0129 17:17:00.249933 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:17:00 crc kubenswrapper[4813]: I0129 17:17:00.251094 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.992809 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gnfpf"] Jan 29 17:17:27 crc kubenswrapper[4813]: E0129 17:17:27.993565 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="extract-utilities" Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.993577 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="extract-utilities" Jan 29 17:17:27 crc kubenswrapper[4813]: E0129 17:17:27.993591 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="registry-server" Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.993598 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="registry-server" Jan 29 17:17:27 crc kubenswrapper[4813]: E0129 17:17:27.993613 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="extract-content" Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.993620 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="extract-content" Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.993753 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="registry-server" Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.994652 4813 util.go:30] "No sandbox for pod can be found. 
Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.992809 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gnfpf"]
Jan 29 17:17:27 crc kubenswrapper[4813]: E0129 17:17:27.993565 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="extract-utilities"
Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.993577 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="extract-utilities"
Jan 29 17:17:27 crc kubenswrapper[4813]: E0129 17:17:27.993591 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="registry-server"
Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.993598 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="registry-server"
Jan 29 17:17:27 crc kubenswrapper[4813]: E0129 17:17:27.993613 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="extract-content"
Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.993620 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="extract-content"
Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.993753 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00a0bc7-b4f1-4542-b9dd-63313b498ed4" containerName="registry-server"
Jan 29 17:17:27 crc kubenswrapper[4813]: I0129 17:17:27.994652 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.008655 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gnfpf"]
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.081220 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-catalog-content\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.081391 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6vz\" (UniqueName: \"kubernetes.io/projected/8ff76202-bd79-414d-9a90-2535a8966f72-kube-api-access-jw6vz\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.081489 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-utilities\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.182424 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-catalog-content\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.182732 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw6vz\" (UniqueName: \"kubernetes.io/projected/8ff76202-bd79-414d-9a90-2535a8966f72-kube-api-access-jw6vz\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.182848 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-utilities\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.182913 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-catalog-content\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.183411 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-utilities\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.203736 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw6vz\" (UniqueName: \"kubernetes.io/projected/8ff76202-bd79-414d-9a90-2535a8966f72-kube-api-access-jw6vz\") pod \"redhat-operators-gnfpf\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") " pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.310435 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:28 crc kubenswrapper[4813]: I0129 17:17:28.733255 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gnfpf"]
Jan 29 17:17:29 crc kubenswrapper[4813]: I0129 17:17:29.724667 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ff76202-bd79-414d-9a90-2535a8966f72" containerID="52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d" exitCode=0
Jan 29 17:17:29 crc kubenswrapper[4813]: I0129 17:17:29.724799 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnfpf" event={"ID":"8ff76202-bd79-414d-9a90-2535a8966f72","Type":"ContainerDied","Data":"52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d"}
Jan 29 17:17:29 crc kubenswrapper[4813]: I0129 17:17:29.725011 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnfpf" event={"ID":"8ff76202-bd79-414d-9a90-2535a8966f72","Type":"ContainerStarted","Data":"2263423ddf34860197b93eff42e73acf479ca52b3036bf9d249bb033043fd074"}
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.240904 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.240971 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.248766 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r"
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.249547 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.249612 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" gracePeriod=600
Jan 29 17:17:30 crc kubenswrapper[4813]: E0129 17:17:30.391953 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.733344 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnfpf" event={"ID":"8ff76202-bd79-414d-9a90-2535a8966f72","Type":"ContainerStarted","Data":"d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1"}
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.736308 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" exitCode=0
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.736356 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095"}
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.736394 4813 scope.go:117] "RemoveContainer" containerID="6d0aa7efce0bc9b386a4ec5e02992d44c6a4525992811def1b1518b0d270a965"
Jan 29 17:17:30 crc kubenswrapper[4813]: I0129 17:17:30.737004 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095"
Jan 29 17:17:30 crc kubenswrapper[4813]: E0129 17:17:30.737272 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:17:31 crc kubenswrapper[4813]: I0129 17:17:31.750711 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ff76202-bd79-414d-9a90-2535a8966f72" containerID="d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1" exitCode=0
Jan 29 17:17:31 crc kubenswrapper[4813]: I0129 17:17:31.750785 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnfpf" event={"ID":"8ff76202-bd79-414d-9a90-2535a8966f72","Type":"ContainerDied","Data":"d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1"}
Jan 29 17:17:32 crc kubenswrapper[4813]: I0129 17:17:32.765484 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnfpf" event={"ID":"8ff76202-bd79-414d-9a90-2535a8966f72","Type":"ContainerStarted","Data":"7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0"}
Jan 29 17:17:32 crc kubenswrapper[4813]: I0129 17:17:32.788898 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gnfpf" podStartSLOduration=3.041690573 podStartE2EDuration="5.788878271s" podCreationTimestamp="2026-01-29 17:17:27 +0000 UTC" firstStartedPulling="2026-01-29 17:17:29.726638505 +0000 UTC m=+2902.213841711" lastFinishedPulling="2026-01-29 17:17:32.473826193 +0000 UTC m=+2904.961029409" observedRunningTime="2026-01-29 17:17:32.783476238 +0000 UTC m=+2905.270679504" watchObservedRunningTime="2026-01-29 17:17:32.788878271 +0000 UTC m=+2905.276081487"
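
The "Generic (PLEG)" lines that recur above come from the pod lifecycle event generator: it periodically relists containers from the runtime and turns state transitions into ContainerStarted/ContainerDied events, which the sync loop then consumes. A toy sketch of that diffing step with simplified stand-in types, not kubelet's actual implementation:

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct{ Type, ContainerID string }

// diff compares the previous and current relist snapshots for one pod and
// emits PLEG-style lifecycle events.
func diff(prev, curr map[string]state) []event {
	var out []event
	for id, s := range curr {
		switch {
		case s == running && prev[id] != running:
			out = append(out, event{"ContainerStarted", id})
		case s == exited && prev[id] == running:
			out = append(out, event{"ContainerDied", id})
		}
	}
	return out
}

func main() {
	prev := map[string]state{"d2a1a2f2": running}
	curr := map[string]state{"d2a1a2f2": exited}
	fmt.Println(diff(prev, curr)) // [{ContainerDied d2a1a2f2}]
}
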
Jan 29 17:17:38 crc kubenswrapper[4813]: I0129 17:17:38.311323 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:38 crc kubenswrapper[4813]: I0129 17:17:38.311860 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:38 crc kubenswrapper[4813]: I0129 17:17:38.349870 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:38 crc kubenswrapper[4813]: I0129 17:17:38.838797 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:38 crc kubenswrapper[4813]: I0129 17:17:38.892477 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gnfpf"]
Jan 29 17:17:40 crc kubenswrapper[4813]: I0129 17:17:40.811907 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gnfpf" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" containerName="registry-server" containerID="cri-o://7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0" gracePeriod=2
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.193391 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.360172 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-utilities\") pod \"8ff76202-bd79-414d-9a90-2535a8966f72\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") "
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.360293 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw6vz\" (UniqueName: \"kubernetes.io/projected/8ff76202-bd79-414d-9a90-2535a8966f72-kube-api-access-jw6vz\") pod \"8ff76202-bd79-414d-9a90-2535a8966f72\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") "
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.360315 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-catalog-content\") pod \"8ff76202-bd79-414d-9a90-2535a8966f72\" (UID: \"8ff76202-bd79-414d-9a90-2535a8966f72\") "
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.361194 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-utilities" (OuterVolumeSpecName: "utilities") pod "8ff76202-bd79-414d-9a90-2535a8966f72" (UID: "8ff76202-bd79-414d-9a90-2535a8966f72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.367304 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff76202-bd79-414d-9a90-2535a8966f72-kube-api-access-jw6vz" (OuterVolumeSpecName: "kube-api-access-jw6vz") pod "8ff76202-bd79-414d-9a90-2535a8966f72" (UID: "8ff76202-bd79-414d-9a90-2535a8966f72"). InnerVolumeSpecName "kube-api-access-jw6vz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.461638 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.461672 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw6vz\" (UniqueName: \"kubernetes.io/projected/8ff76202-bd79-414d-9a90-2535a8966f72-kube-api-access-jw6vz\") on node \"crc\" DevicePath \"\""
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.819907 4813 generic.go:334] "Generic (PLEG): container finished" podID="8ff76202-bd79-414d-9a90-2535a8966f72" containerID="7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0" exitCode=0
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.819951 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnfpf" event={"ID":"8ff76202-bd79-414d-9a90-2535a8966f72","Type":"ContainerDied","Data":"7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0"}
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.819982 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gnfpf"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.819998 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gnfpf" event={"ID":"8ff76202-bd79-414d-9a90-2535a8966f72","Type":"ContainerDied","Data":"2263423ddf34860197b93eff42e73acf479ca52b3036bf9d249bb033043fd074"}
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.820017 4813 scope.go:117] "RemoveContainer" containerID="7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.835212 4813 scope.go:117] "RemoveContainer" containerID="d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.863549 4813 scope.go:117] "RemoveContainer" containerID="52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.877878 4813 scope.go:117] "RemoveContainer" containerID="7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0"
Jan 29 17:17:41 crc kubenswrapper[4813]: E0129 17:17:41.878306 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0\": container with ID starting with 7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0 not found: ID does not exist" containerID="7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.878417 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0"} err="failed to get container status \"7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0\": rpc error: code = NotFound desc = could not find container \"7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0\": container with ID starting with 7d6c960f23f06d2a5ba2c2c69941e5d9420c109caa50739f1233e5e0b71e0ef0 not found: ID does not exist"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.878497 4813 scope.go:117] "RemoveContainer" containerID="d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1"
Jan 29 17:17:41 crc kubenswrapper[4813]: E0129 17:17:41.879001 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1\": container with ID starting with d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1 not found: ID does not exist" containerID="d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.879039 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1"} err="failed to get container status \"d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1\": rpc error: code = NotFound desc = could not find container \"d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1\": container with ID starting with d2a1a2f200a9f427476fd5eec0811415244fe8e089038b65aede1ec667565eb1 not found: ID does not exist"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.879067 4813 scope.go:117] "RemoveContainer" containerID="52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d"
Jan 29 17:17:41 crc kubenswrapper[4813]: E0129 17:17:41.879399 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d\": container with ID starting with 52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d not found: ID does not exist" containerID="52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d"
Jan 29 17:17:41 crc kubenswrapper[4813]: I0129 17:17:41.879482 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d"} err="failed to get container status \"52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d\": rpc error: code = NotFound desc = could not find container \"52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d\": container with ID starting with 52fd0d62c18278b4d7fce41b0e4c9af485d5627bdf8916466ee811ad0b106b2d not found: ID does not exist"
Jan 29 17:17:42 crc kubenswrapper[4813]: I0129 17:17:42.022972 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ff76202-bd79-414d-9a90-2535a8966f72" (UID: "8ff76202-bd79-414d-9a90-2535a8966f72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:17:42 crc kubenswrapper[4813]: I0129 17:17:42.070034 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff76202-bd79-414d-9a90-2535a8966f72-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 17:17:42 crc kubenswrapper[4813]: I0129 17:17:42.166490 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gnfpf"]
Jan 29 17:17:42 crc kubenswrapper[4813]: I0129 17:17:42.174634 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gnfpf"]
Jan 29 17:17:42 crc kubenswrapper[4813]: I0129 17:17:42.239433 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095"
Jan 29 17:17:42 crc kubenswrapper[4813]: E0129 17:17:42.239676 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:17:42 crc kubenswrapper[4813]: I0129 17:17:42.249937 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" path="/var/lib/kubelet/pods/8ff76202-bd79-414d-9a90-2535a8966f72/volumes"
Jan 29 17:17:55 crc kubenswrapper[4813]: I0129 17:17:55.239490 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095"
Jan 29 17:17:55 crc kubenswrapper[4813]: E0129 17:17:55.240321 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:18:10 crc kubenswrapper[4813]: I0129 17:18:10.239345 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095"
Jan 29 17:18:10 crc kubenswrapper[4813]: E0129 17:18:10.240104 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:18:34 crc kubenswrapper[4813]: I0129 17:18:34.239998 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:18:34 crc kubenswrapper[4813]: E0129 17:18:34.240819 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:18:46 crc kubenswrapper[4813]: I0129 17:18:46.239675 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:18:46 crc kubenswrapper[4813]: E0129 17:18:46.240442 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:19:00 crc kubenswrapper[4813]: I0129 17:19:00.240296 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:19:00 crc kubenswrapper[4813]: E0129 17:19:00.240779 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:19:15 crc kubenswrapper[4813]: I0129 17:19:15.240004 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:19:15 crc kubenswrapper[4813]: E0129 17:19:15.242009 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:19:29 crc kubenswrapper[4813]: I0129 17:19:29.239251 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:19:29 crc kubenswrapper[4813]: E0129 17:19:29.240039 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.076476 4813 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bj4qx"] Jan 29 17:19:30 crc kubenswrapper[4813]: E0129 17:19:30.076745 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" containerName="extract-utilities" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.076757 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" containerName="extract-utilities" Jan 29 17:19:30 crc kubenswrapper[4813]: E0129 17:19:30.076775 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" containerName="registry-server" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.076782 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" containerName="registry-server" Jan 29 17:19:30 crc kubenswrapper[4813]: E0129 17:19:30.076798 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" containerName="extract-content" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.076805 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" containerName="extract-content" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.076945 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff76202-bd79-414d-9a90-2535a8966f72" containerName="registry-server" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.078002 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.096172 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bj4qx"] Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.254489 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-catalog-content\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.254896 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-utilities\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.254933 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdhm\" (UniqueName: \"kubernetes.io/projected/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-kube-api-access-pcdhm\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.355967 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-catalog-content\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: 
I0129 17:19:30.356062 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-utilities\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.356129 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcdhm\" (UniqueName: \"kubernetes.io/projected/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-kube-api-access-pcdhm\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.356546 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-catalog-content\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.356822 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-utilities\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.386485 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcdhm\" (UniqueName: \"kubernetes.io/projected/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-kube-api-access-pcdhm\") pod \"certified-operators-bj4qx\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.395932 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:30 crc kubenswrapper[4813]: I0129 17:19:30.877385 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bj4qx"] Jan 29 17:19:31 crc kubenswrapper[4813]: I0129 17:19:31.677407 4813 generic.go:334] "Generic (PLEG): container finished" podID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerID="927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17" exitCode=0 Jan 29 17:19:31 crc kubenswrapper[4813]: I0129 17:19:31.677481 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj4qx" event={"ID":"2deb83d0-e8b8-4df8-96ba-ae019d8debd8","Type":"ContainerDied","Data":"927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17"} Jan 29 17:19:31 crc kubenswrapper[4813]: I0129 17:19:31.677523 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj4qx" event={"ID":"2deb83d0-e8b8-4df8-96ba-ae019d8debd8","Type":"ContainerStarted","Data":"ad06c161f779f0d8f8352774a9a2a1a3caf6129556afcbbd1b9eaa7e3de1781a"} Jan 29 17:19:32 crc kubenswrapper[4813]: I0129 17:19:32.686470 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj4qx" event={"ID":"2deb83d0-e8b8-4df8-96ba-ae019d8debd8","Type":"ContainerStarted","Data":"258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f"} Jan 29 17:19:33 crc kubenswrapper[4813]: I0129 17:19:33.699883 4813 generic.go:334] "Generic (PLEG): container finished" podID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerID="258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f" exitCode=0 Jan 29 17:19:33 crc kubenswrapper[4813]: I0129 17:19:33.699991 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj4qx" event={"ID":"2deb83d0-e8b8-4df8-96ba-ae019d8debd8","Type":"ContainerDied","Data":"258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f"} Jan 29 17:19:34 crc kubenswrapper[4813]: I0129 17:19:34.710731 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj4qx" event={"ID":"2deb83d0-e8b8-4df8-96ba-ae019d8debd8","Type":"ContainerStarted","Data":"9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a"} Jan 29 17:19:34 crc kubenswrapper[4813]: I0129 17:19:34.737232 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bj4qx" podStartSLOduration=2.243402169 podStartE2EDuration="4.737213398s" podCreationTimestamp="2026-01-29 17:19:30 +0000 UTC" firstStartedPulling="2026-01-29 17:19:31.681547347 +0000 UTC m=+3024.168750563" lastFinishedPulling="2026-01-29 17:19:34.175358576 +0000 UTC m=+3026.662561792" observedRunningTime="2026-01-29 17:19:34.732375851 +0000 UTC m=+3027.219579067" watchObservedRunningTime="2026-01-29 17:19:34.737213398 +0000 UTC m=+3027.224416624" Jan 29 17:19:40 crc kubenswrapper[4813]: I0129 17:19:40.397542 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:40 crc kubenswrapper[4813]: I0129 17:19:40.398675 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:40 crc kubenswrapper[4813]: I0129 17:19:40.441265 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
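
The probe sequence here (startup "unhealthy", readiness "", then startup "started" followed by readiness "ready") shows the startup probe gating the others: readiness has no effective result until the startup probe first succeeds. A toy illustration of that ordering, not kubelet's prober:

package main

import "fmt"

type podProbes struct {
	started bool // set once the startup probe first succeeds
	ready   bool // latest readiness probe result
}

// effectiveReady mirrors the ordering in the log: readiness stays withheld
// until startup has succeeded, whatever the readiness probe says.
func (p podProbes) effectiveReady() bool {
	return p.started && p.ready
}

func main() {
	p := podProbes{}
	fmt.Println(p.effectiveReady()) // false: startup still unhealthy
	p.started = true                // probe="startup" status="started"
	p.ready = true                  // probe="readiness" status="ready"
	fmt.Println(p.effectiveReady()) // true
}
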
pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:40 crc kubenswrapper[4813]: I0129 17:19:40.794929 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:40 crc kubenswrapper[4813]: I0129 17:19:40.838252 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bj4qx"] Jan 29 17:19:42 crc kubenswrapper[4813]: I0129 17:19:42.240028 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:19:42 crc kubenswrapper[4813]: E0129 17:19:42.240583 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:19:42 crc kubenswrapper[4813]: I0129 17:19:42.764214 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bj4qx" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="registry-server" containerID="cri-o://9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a" gracePeriod=2 Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.675099 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.767920 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcdhm\" (UniqueName: \"kubernetes.io/projected/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-kube-api-access-pcdhm\") pod \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.768078 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-utilities\") pod \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.768154 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-catalog-content\") pod \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\" (UID: \"2deb83d0-e8b8-4df8-96ba-ae019d8debd8\") " Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.769674 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-utilities" (OuterVolumeSpecName: "utilities") pod "2deb83d0-e8b8-4df8-96ba-ae019d8debd8" (UID: "2deb83d0-e8b8-4df8-96ba-ae019d8debd8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.777088 4813 generic.go:334] "Generic (PLEG): container finished" podID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerID="9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a" exitCode=0 Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.777150 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj4qx" event={"ID":"2deb83d0-e8b8-4df8-96ba-ae019d8debd8","Type":"ContainerDied","Data":"9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a"} Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.777178 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj4qx" event={"ID":"2deb83d0-e8b8-4df8-96ba-ae019d8debd8","Type":"ContainerDied","Data":"ad06c161f779f0d8f8352774a9a2a1a3caf6129556afcbbd1b9eaa7e3de1781a"} Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.777181 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-kube-api-access-pcdhm" (OuterVolumeSpecName: "kube-api-access-pcdhm") pod "2deb83d0-e8b8-4df8-96ba-ae019d8debd8" (UID: "2deb83d0-e8b8-4df8-96ba-ae019d8debd8"). InnerVolumeSpecName "kube-api-access-pcdhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.777194 4813 scope.go:117] "RemoveContainer" containerID="9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.777251 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bj4qx" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.814084 4813 scope.go:117] "RemoveContainer" containerID="258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.835272 4813 scope.go:117] "RemoveContainer" containerID="927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.857792 4813 scope.go:117] "RemoveContainer" containerID="9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a" Jan 29 17:19:43 crc kubenswrapper[4813]: E0129 17:19:43.858353 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a\": container with ID starting with 9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a not found: ID does not exist" containerID="9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.858409 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a"} err="failed to get container status \"9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a\": rpc error: code = NotFound desc = could not find container \"9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a\": container with ID starting with 9f5a003682e5aaf9790b3490155468b4493a4610f41599a4b4420eff4251a04a not found: ID does not exist" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.858441 4813 scope.go:117] "RemoveContainer" 
containerID="258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f" Jan 29 17:19:43 crc kubenswrapper[4813]: E0129 17:19:43.858924 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f\": container with ID starting with 258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f not found: ID does not exist" containerID="258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.858977 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f"} err="failed to get container status \"258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f\": rpc error: code = NotFound desc = could not find container \"258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f\": container with ID starting with 258db3fe843070de56b078bccdcf3fb5f83e596cb6c8d1854b77fb5c1174623f not found: ID does not exist" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.859009 4813 scope.go:117] "RemoveContainer" containerID="927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17" Jan 29 17:19:43 crc kubenswrapper[4813]: E0129 17:19:43.859565 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17\": container with ID starting with 927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17 not found: ID does not exist" containerID="927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.859616 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17"} err="failed to get container status \"927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17\": rpc error: code = NotFound desc = could not find container \"927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17\": container with ID starting with 927fd6f73f7a12ed50575f46eb7120bb06920d6694c309c2911ebfc887dbcf17 not found: ID does not exist" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.870307 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:19:43 crc kubenswrapper[4813]: I0129 17:19:43.870383 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcdhm\" (UniqueName: \"kubernetes.io/projected/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-kube-api-access-pcdhm\") on node \"crc\" DevicePath \"\"" Jan 29 17:19:44 crc kubenswrapper[4813]: I0129 17:19:44.606770 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2deb83d0-e8b8-4df8-96ba-ae019d8debd8" (UID: "2deb83d0-e8b8-4df8-96ba-ae019d8debd8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:19:44 crc kubenswrapper[4813]: I0129 17:19:44.706581 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2deb83d0-e8b8-4df8-96ba-ae019d8debd8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:19:44 crc kubenswrapper[4813]: I0129 17:19:44.714476 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bj4qx"] Jan 29 17:19:44 crc kubenswrapper[4813]: I0129 17:19:44.720744 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bj4qx"] Jan 29 17:19:46 crc kubenswrapper[4813]: I0129 17:19:46.247326 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" path="/var/lib/kubelet/pods/2deb83d0-e8b8-4df8-96ba-ae019d8debd8/volumes" Jan 29 17:19:57 crc kubenswrapper[4813]: I0129 17:19:57.240098 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:19:57 crc kubenswrapper[4813]: E0129 17:19:57.241205 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:20:11 crc kubenswrapper[4813]: I0129 17:20:11.239677 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:20:11 crc kubenswrapper[4813]: E0129 17:20:11.240468 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:20:22 crc kubenswrapper[4813]: I0129 17:20:22.239419 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:20:22 crc kubenswrapper[4813]: E0129 17:20:22.240347 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:20:34 crc kubenswrapper[4813]: I0129 17:20:34.239809 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:20:34 crc kubenswrapper[4813]: E0129 17:20:34.240629 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:20:47 crc kubenswrapper[4813]: I0129 17:20:47.240194 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:20:47 crc kubenswrapper[4813]: E0129 17:20:47.241002 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:21:00 crc kubenswrapper[4813]: I0129 17:21:00.240344 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:21:00 crc kubenswrapper[4813]: E0129 17:21:00.240933 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:21:15 crc kubenswrapper[4813]: I0129 17:21:15.240645 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:21:15 crc kubenswrapper[4813]: E0129 17:21:15.241331 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:21:28 crc kubenswrapper[4813]: I0129 17:21:28.245316 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:21:28 crc kubenswrapper[4813]: E0129 17:21:28.247286 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:21:40 crc kubenswrapper[4813]: I0129 17:21:40.239559 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:21:40 crc kubenswrapper[4813]: E0129 17:21:40.240408 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:21:51 crc kubenswrapper[4813]: I0129 17:21:51.239040 4813 
scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:21:51 crc kubenswrapper[4813]: E0129 17:21:51.239758 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:22:03 crc kubenswrapper[4813]: I0129 17:22:03.239746 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:22:03 crc kubenswrapper[4813]: E0129 17:22:03.240477 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:22:17 crc kubenswrapper[4813]: I0129 17:22:17.239392 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:22:17 crc kubenswrapper[4813]: E0129 17:22:17.240061 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:22:32 crc kubenswrapper[4813]: I0129 17:22:32.239722 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095" Jan 29 17:22:32 crc kubenswrapper[4813]: I0129 17:22:32.894721 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"bbf1f35f8b3a79a725a75a727ff667f24d8f518dc16c54ac51858cc4cacb36d8"} Jan 29 17:25:00 crc kubenswrapper[4813]: I0129 17:25:00.239980 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:25:00 crc kubenswrapper[4813]: I0129 17:25:00.240812 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:25:30 crc kubenswrapper[4813]: I0129 17:25:30.240651 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 29 17:25:30 crc kubenswrapper[4813]: I0129 17:25:30.241232 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:26:00 crc kubenswrapper[4813]: I0129 17:26:00.239963 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 17:26:00 crc kubenswrapper[4813]: I0129 17:26:00.240731 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 17:26:00 crc kubenswrapper[4813]: I0129 17:26:00.257979 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r"
Jan 29 17:26:00 crc kubenswrapper[4813]: I0129 17:26:00.258843 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbf1f35f8b3a79a725a75a727ff667f24d8f518dc16c54ac51858cc4cacb36d8"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 17:26:00 crc kubenswrapper[4813]: I0129 17:26:00.258945 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://bbf1f35f8b3a79a725a75a727ff667f24d8f518dc16c54ac51858cc4cacb36d8" gracePeriod=600
Jan 29 17:26:00 crc kubenswrapper[4813]: I0129 17:26:00.432171 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="bbf1f35f8b3a79a725a75a727ff667f24d8f518dc16c54ac51858cc4cacb36d8" exitCode=0
Jan 29 17:26:00 crc kubenswrapper[4813]: I0129 17:26:00.432235 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"bbf1f35f8b3a79a725a75a727ff667f24d8f518dc16c54ac51858cc4cacb36d8"}
Jan 29 17:26:00 crc kubenswrapper[4813]: I0129 17:26:00.432527 4813 scope.go:117] "RemoveContainer" containerID="261e5e96d16ac3b838bb5acf39a6781bee9a977d2e7784735f531db1dc753095"
Jan 29 17:26:01 crc kubenswrapper[4813]: I0129 17:26:01.442028 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerStarted","Data":"f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf"}
Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.558651 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vcx7d"]
Jan 29 17:26:28 crc kubenswrapper[4813]: E0129 17:26:28.560766 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="registry-server"
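The restart at 17:26:00 is probe-driven rather than crash-driven: the failures logged at 17:25:00, 17:25:30 and 17:26:00 are consistent with a 30 s probe period and a failure threshold of three, after which the kubelet kills the container with the pod's 600 s grace period and starts a replacement. The probe itself is just an HTTP GET against the daemon's health endpoint; a sketch of the equivalent check (127.0.0.1:8798 comes from the log, everything else is assumption; run it on the node itself):

    import urllib.request

    # The same check the kubelet performs for this liveness probe.
    # The kubelet counts 200-399 as healthy; a connection error or
    # any other status counts as a failure.
    URL = "http://127.0.0.1:8798/health"

    try:
        with urllib.request.urlopen(URL, timeout=1) as resp:
            print(f"probe success: HTTP {resp.status}")
    except OSError as exc:  # 'connection refused' lands here, as in the log above
        print(f"probe failure: {exc}")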
"RemoveStaleState: removing container" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="registry-server" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.560877 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="registry-server" Jan 29 17:26:28 crc kubenswrapper[4813]: E0129 17:26:28.560961 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="extract-content" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.561040 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="extract-content" Jan 29 17:26:28 crc kubenswrapper[4813]: E0129 17:26:28.561147 4813 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="extract-utilities" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.561261 4813 state_mem.go:107] "Deleted CPUSet assignment" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="extract-utilities" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.561556 4813 memory_manager.go:354] "RemoveStaleState removing state" podUID="2deb83d0-e8b8-4df8-96ba-ae019d8debd8" containerName="registry-server" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.562913 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.577556 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vcx7d"] Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.713970 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-catalog-content\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.714522 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l7mr\" (UniqueName: \"kubernetes.io/projected/483ea71b-33ae-40f2-9120-3b22c0f6ff21-kube-api-access-5l7mr\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.714599 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-utilities\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.816726 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-catalog-content\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.816802 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l7mr\" (UniqueName: 
\"kubernetes.io/projected/483ea71b-33ae-40f2-9120-3b22c0f6ff21-kube-api-access-5l7mr\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.816859 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-utilities\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.817528 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-utilities\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.817534 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-catalog-content\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.837146 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l7mr\" (UniqueName: \"kubernetes.io/projected/483ea71b-33ae-40f2-9120-3b22c0f6ff21-kube-api-access-5l7mr\") pod \"community-operators-vcx7d\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:28 crc kubenswrapper[4813]: I0129 17:26:28.880566 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:26:29 crc kubenswrapper[4813]: W0129 17:26:29.390631 4813 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod483ea71b_33ae_40f2_9120_3b22c0f6ff21.slice/crio-cd7611a86e0ecf3a4b25edd2f9759a6c5f58dd33c5d7b964f629d0324fa33f43 WatchSource:0}: Error finding container cd7611a86e0ecf3a4b25edd2f9759a6c5f58dd33c5d7b964f629d0324fa33f43: Status 404 returned error can't find the container with id cd7611a86e0ecf3a4b25edd2f9759a6c5f58dd33c5d7b964f629d0324fa33f43 Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.390954 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vcx7d"] Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.545252 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdxc"] Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.547267 4813 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.560041 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdxc"] Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.628299 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-utilities\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.628426 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv5nq\" (UniqueName: \"kubernetes.io/projected/11f96ea5-984e-41c6-ad14-0ce689ec882f-kube-api-access-rv5nq\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.628472 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-catalog-content\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.646788 4813 generic.go:334] "Generic (PLEG): container finished" podID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" containerID="7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16" exitCode=0 Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.646853 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vcx7d" event={"ID":"483ea71b-33ae-40f2-9120-3b22c0f6ff21","Type":"ContainerDied","Data":"7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16"} Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.646902 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vcx7d" event={"ID":"483ea71b-33ae-40f2-9120-3b22c0f6ff21","Type":"ContainerStarted","Data":"cd7611a86e0ecf3a4b25edd2f9759a6c5f58dd33c5d7b964f629d0324fa33f43"} Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.649152 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.729807 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv5nq\" (UniqueName: \"kubernetes.io/projected/11f96ea5-984e-41c6-ad14-0ce689ec882f-kube-api-access-rv5nq\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.729944 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-catalog-content\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.730036 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-utilities\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.730538 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-utilities\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.730545 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-catalog-content\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.746941 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv5nq\" (UniqueName: \"kubernetes.io/projected/11f96ea5-984e-41c6-ad14-0ce689ec882f-kube-api-access-rv5nq\") pod \"redhat-marketplace-4sdxc\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:26:29 crc kubenswrapper[4813]: E0129 17:26:29.784618 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:26:29 crc kubenswrapper[4813]: E0129 17:26:29.784745 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5l7mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vcx7d_openshift-marketplace(483ea71b-33ae-40f2-9120-3b22c0f6ff21): ErrImagePull: 
Jan 29 17:26:29 crc kubenswrapper[4813]: E0129 17:26:29.785800 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:26:29 crc kubenswrapper[4813]: I0129 17:26:29.925972 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdxc"
Jan 29 17:26:30 crc kubenswrapper[4813]: I0129 17:26:30.165884 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdxc"]
Jan 29 17:26:30 crc kubenswrapper[4813]: I0129 17:26:30.657268 4813 generic.go:334] "Generic (PLEG): container finished" podID="11f96ea5-984e-41c6-ad14-0ce689ec882f" containerID="de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500" exitCode=0
Jan 29 17:26:30 crc kubenswrapper[4813]: I0129 17:26:30.657359 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdxc" event={"ID":"11f96ea5-984e-41c6-ad14-0ce689ec882f","Type":"ContainerDied","Data":"de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500"}
Jan 29 17:26:30 crc kubenswrapper[4813]: I0129 17:26:30.657391 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdxc" event={"ID":"11f96ea5-984e-41c6-ad14-0ce689ec882f","Type":"ContainerStarted","Data":"502e0960a0e3232b6d9339e410f807f0a92d856ca14751dbed2f303995df0264"}
Jan 29 17:26:30 crc kubenswrapper[4813]: E0129 17:26:30.659423 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:26:30 crc kubenswrapper[4813]: E0129 17:26:30.776461 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 17:26:30 crc kubenswrapper[4813]: E0129 17:26:30.776623 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv5nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4sdxc_openshift-marketplace(11f96ea5-984e-41c6-ad14-0ce689ec882f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:26:30 crc kubenswrapper[4813]: E0129 17:26:30.778294 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:26:31 crc kubenswrapper[4813]: E0129 17:26:31.665549 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:26:44 crc kubenswrapper[4813]: E0129 17:26:44.366626 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 17:26:44 crc kubenswrapper[4813]: E0129 17:26:44.367925 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5l7mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vcx7d_openshift-marketplace(483ea71b-33ae-40f2-9120-3b22c0f6ff21): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:26:44 crc kubenswrapper[4813]: E0129 17:26:44.369208 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:26:47 crc kubenswrapper[4813]: E0129 17:26:47.362049 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 17:26:47 crc kubenswrapper[4813]: E0129 17:26:47.362488 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv5nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4sdxc_openshift-marketplace(11f96ea5-984e-41c6-ad14-0ce689ec882f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:26:47 crc kubenswrapper[4813]: E0129 17:26:47.363697 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:26:56 crc kubenswrapper[4813]: E0129 17:26:56.241142 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:27:00 crc kubenswrapper[4813]: E0129 17:27:00.242066 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:27:11 crc kubenswrapper[4813]: E0129 17:27:11.370419 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 17:27:11 crc kubenswrapper[4813]: E0129 17:27:11.371395 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5l7mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vcx7d_openshift-marketplace(483ea71b-33ae-40f2-9120-3b22c0f6ff21): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:27:11 crc kubenswrapper[4813]: E0129 17:27:11.372720 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:27:13 crc kubenswrapper[4813]: E0129 17:27:13.359632 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 29 17:27:13 crc kubenswrapper[4813]: E0129 17:27:13.360075 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv5nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4sdxc_openshift-marketplace(11f96ea5-984e-41c6-ad14-0ce689ec882f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:27:13 crc kubenswrapper[4813]: E0129 17:27:13.361374 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.453322 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lfj4c/must-gather-c8855"]
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.454896 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.456697 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lfj4c"/"openshift-service-ca.crt"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.457397 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lfj4c"/"kube-root-ca.crt"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.458401 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lfj4c"/"default-dockercfg-ksv4f"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.484020 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lfj4c/must-gather-c8855"]
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.649397 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-must-gather-output\") pod \"must-gather-c8855\" (UID: \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\") " pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.652494 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwrp6\" (UniqueName: \"kubernetes.io/projected/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-kube-api-access-nwrp6\") pod \"must-gather-c8855\" (UID: \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\") " pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.754448 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwrp6\" (UniqueName: \"kubernetes.io/projected/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-kube-api-access-nwrp6\") pod \"must-gather-c8855\" (UID: \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\") " pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.754553 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-must-gather-output\") pod \"must-gather-c8855\" (UID: \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\") " pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.755088 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-must-gather-output\") pod \"must-gather-c8855\" (UID: \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\") " pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:27:16 crc kubenswrapper[4813]: I0129 17:27:16.800440 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwrp6\" (UniqueName: \"kubernetes.io/projected/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-kube-api-access-nwrp6\") pod \"must-gather-c8855\" (UID: \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\") " pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:27:17 crc kubenswrapper[4813]: I0129 17:27:17.083569 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:27:17 crc kubenswrapper[4813]: I0129 17:27:17.510530 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lfj4c/must-gather-c8855"]
Jan 29 17:27:17 crc kubenswrapper[4813]: I0129 17:27:17.979688 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lfj4c/must-gather-c8855" event={"ID":"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb","Type":"ContainerStarted","Data":"44f9b4d29cee0a6e2936d1047514f1f5e6eca2c01c8c61ab98cfeda6d0be9798"}
Jan 29 17:27:24 crc kubenswrapper[4813]: I0129 17:27:24.027827 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lfj4c/must-gather-c8855" event={"ID":"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb","Type":"ContainerStarted","Data":"be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27"}
Jan 29 17:27:24 crc kubenswrapper[4813]: I0129 17:27:24.028367 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lfj4c/must-gather-c8855" event={"ID":"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb","Type":"ContainerStarted","Data":"caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac"}
Jan 29 17:27:24 crc kubenswrapper[4813]: I0129 17:27:24.045826 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lfj4c/must-gather-c8855" podStartSLOduration=2.468580153 podStartE2EDuration="8.045801418s" podCreationTimestamp="2026-01-29 17:27:16 +0000 UTC" firstStartedPulling="2026-01-29 17:27:17.517794389 +0000 UTC m=+3490.004997605" lastFinishedPulling="2026-01-29 17:27:23.095015654 +0000 UTC m=+3495.582218870" observedRunningTime="2026-01-29 17:27:24.044992505 +0000 UTC m=+3496.532195721" watchObservedRunningTime="2026-01-29 17:27:24.045801418 +0000 UTC m=+3496.533004634"
Jan 29 17:27:24 crc kubenswrapper[4813]: E0129 17:27:24.242450 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:27:27 crc kubenswrapper[4813]: E0129 17:27:27.241745 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:27:35 crc kubenswrapper[4813]: E0129 17:27:35.241636 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:27:42 crc kubenswrapper[4813]: E0129 17:27:42.242313 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:27:46 crc kubenswrapper[4813]: E0129 17:27:46.242016 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
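The pod_startup_latency_tracker entry above for must-gather-c8855 is internally consistent: podStartSLOduration (2.468580153 s) is the end-to-end startup duration (8.045801418 s) minus the image-pull window (17:27:17.517794389 to 17:27:23.095015654, about 5.577 s). A sketch verifying the arithmetic from the logged fields:

    from datetime import datetime

    # Reproduce the kubelet's podStartSLOduration arithmetic:
    # SLO latency = end-to-end latency minus time spent pulling images.
    # (%f only takes microseconds, so the nanosecond stamps are truncated.)
    FMT = "%Y-%m-%d %H:%M:%S.%f"
    first_pull = datetime.strptime("2026-01-29 17:27:17.517794389"[:26], FMT)
    last_pull = datetime.strptime("2026-01-29 17:27:23.095015654"[:26], FMT)
    e2e = 8.045801418  # podStartE2EDuration

    slo = e2e - (last_pull - first_pull).total_seconds()
    print(f"podStartSLOduration ~= {slo:.6f}s")  # ~= 2.468580s, matching the log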
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:27:55 crc kubenswrapper[4813]: E0129 17:27:55.362525 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:27:55 crc kubenswrapper[4813]: E0129 17:27:55.363155 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv5nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4sdxc_openshift-marketplace(11f96ea5-984e-41c6-ad14-0ce689ec882f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:27:55 crc kubenswrapper[4813]: E0129 17:27:55.364281 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:27:58 crc kubenswrapper[4813]: E0129 17:27:58.369558 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:27:58 crc kubenswrapper[4813]: E0129 17:27:58.370372 4813 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5l7mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vcx7d_openshift-marketplace(483ea71b-33ae-40f2-9120-3b22c0f6ff21): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:27:58 crc kubenswrapper[4813]: E0129 17:27:58.371591 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:28:00 crc kubenswrapper[4813]: I0129 17:28:00.246358 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:28:00 crc kubenswrapper[4813]: I0129 17:28:00.246758 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:28:07 crc kubenswrapper[4813]: E0129 17:28:07.243312 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:28:12 crc kubenswrapper[4813]: 
E0129 17:28:12.242191 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:28:20 crc kubenswrapper[4813]: I0129 17:28:20.894462 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5_25bf8378-ee1c-4a1e-8a87-2c3efd6ae106/util/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.071776 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5_25bf8378-ee1c-4a1e-8a87-2c3efd6ae106/pull/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.106364 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5_25bf8378-ee1c-4a1e-8a87-2c3efd6ae106/pull/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.109969 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5_25bf8378-ee1c-4a1e-8a87-2c3efd6ae106/util/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.260568 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5_25bf8378-ee1c-4a1e-8a87-2c3efd6ae106/util/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.278030 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5_25bf8378-ee1c-4a1e-8a87-2c3efd6ae106/pull/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.306445 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1994j5_25bf8378-ee1c-4a1e-8a87-2c3efd6ae106/extract/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.512771 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-99psd_d7c7a81c-3f15-493f-b7cd-97486af5c4a8/manager/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.559240 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-g5vs4_a5517fc3-4c76-4ecc-bfca-240cbb5877af/manager/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.675403 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-fq8n9_73e44ad8-56ed-43ed-8eed-d466ad56e480/manager/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.768572 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-txhlr_b7614801-3c1c-43c9-b270-8121ef10bb6f/manager/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.858320 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-qrrsx_e5d8b964-7415-4684-85a2-2eb6ba75acb6/manager/0.log" Jan 29 17:28:21 crc kubenswrapper[4813]: I0129 17:28:21.943496 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-wmqxl_74c64061-8f84-45f4-813c-028126b44630/manager/0.log" Jan 29 17:28:22 crc kubenswrapper[4813]: I0129 17:28:22.180730 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-gttcx_8fb27cac-ef23-4b14-bee3-7b69233d9cdc/manager/0.log" Jan 29 17:28:22 crc kubenswrapper[4813]: E0129 17:28:22.242247 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:28:22 crc kubenswrapper[4813]: I0129 17:28:22.308199 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-wmz96_239ca114-1b8f-447f-968e-83c058fb678e/manager/0.log" Jan 29 17:28:22 crc kubenswrapper[4813]: I0129 17:28:22.403858 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-gv495_f6c9bd68-507b-4ecc-a1cd-e88ab1e96727/manager/0.log" Jan 29 17:28:22 crc kubenswrapper[4813]: I0129 17:28:22.521669 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-94b2b_b24c6187-bc48-4261-b142-2269f470e58a/manager/0.log" Jan 29 17:28:22 crc kubenswrapper[4813]: I0129 17:28:22.748215 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-h6hbr_4fa45b77-7f33-448e-9855-5f9f5117bf82/manager/0.log" Jan 29 17:28:22 crc kubenswrapper[4813]: I0129 17:28:22.763458 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-jssxt_53f25c98-8891-4e13-afaa-4cbae4bf1c7e/manager/0.log" Jan 29 17:28:22 crc kubenswrapper[4813]: I0129 17:28:22.983209 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-5mtzp_b67ed668-3ff7-416c-8014-1a5f9668b54c/manager/0.log" Jan 29 17:28:22 crc kubenswrapper[4813]: I0129 17:28:22.985822 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-dk624_79ec4543-cad4-4120-addd-0aeb8756eaaf/manager/0.log" Jan 29 17:28:23 crc kubenswrapper[4813]: I0129 17:28:23.117529 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-86dfb79cc7vzs9r_30cb06d1-8e0a-4702-bd9c-42561afd684c/manager/0.log" Jan 29 17:28:23 crc kubenswrapper[4813]: I0129 17:28:23.300463 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-757f46c65d-r6jhs_8ea5ea66-c7ea-4536-bd82-1e7ea5ef1b58/operator/0.log" Jan 29 17:28:23 crc kubenswrapper[4813]: I0129 17:28:23.562402 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-b9hzj_ee67a921-286b-45f7-b579-0c7463587f13/registry-server/0.log" Jan 29 17:28:23 crc kubenswrapper[4813]: I0129 17:28:23.767480 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-mhdmm_d81220d4-722d-4fc5-9626-826d1eccc841/manager/0.log" Jan 29 
17:28:23 crc kubenswrapper[4813]: I0129 17:28:23.834895 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-8k6nd_6ca5d465-220c-4c51-a5be-c304ddec9e48/manager/0.log" Jan 29 17:28:24 crc kubenswrapper[4813]: I0129 17:28:24.051200 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hvxtm_1f535307-b492-4576-875c-387684d62a42/operator/0.log" Jan 29 17:28:24 crc kubenswrapper[4813]: I0129 17:28:24.064729 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b6f655c79-k4455_69b5b8e5-1a7b-4b24-b53b-5985a23a1f5b/manager/0.log" Jan 29 17:28:24 crc kubenswrapper[4813]: I0129 17:28:24.201219 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-rhsqf_c5267884-745c-47aa-a389-87da2899b706/manager/0.log" Jan 29 17:28:24 crc kubenswrapper[4813]: I0129 17:28:24.290433 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-zhpjf_123e5d6b-93c9-452c-9c77-81333f65487d/manager/0.log" Jan 29 17:28:24 crc kubenswrapper[4813]: I0129 17:28:24.381090 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-p9kz6_c709a691-8375-4f1f-8552-12e96c2da0a8/manager/0.log" Jan 29 17:28:24 crc kubenswrapper[4813]: I0129 17:28:24.429184 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-wsgqh_a3fcd8e8-4123-4172-a3f0-86696ee14d71/manager/0.log" Jan 29 17:28:26 crc kubenswrapper[4813]: E0129 17:28:26.241016 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:28:30 crc kubenswrapper[4813]: I0129 17:28:30.240139 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:28:30 crc kubenswrapper[4813]: I0129 17:28:30.240511 4813 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:28:34 crc kubenswrapper[4813]: E0129 17:28:34.241638 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:28:41 crc kubenswrapper[4813]: E0129 17:28:41.241477 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:28:41 crc kubenswrapper[4813]: I0129 17:28:41.381441 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f299f_7f7cd5f7-bcff-4bc3-a908-19e62cea720c/control-plane-machine-set-operator/0.log" Jan 29 17:28:41 crc kubenswrapper[4813]: I0129 17:28:41.538587 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gzgqp_5227068b-89fa-4779-b1e5-4f3fab7814e9/kube-rbac-proxy/0.log" Jan 29 17:28:41 crc kubenswrapper[4813]: I0129 17:28:41.578332 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gzgqp_5227068b-89fa-4779-b1e5-4f3fab7814e9/machine-api-operator/0.log" Jan 29 17:28:46 crc kubenswrapper[4813]: E0129 17:28:46.241046 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:28:54 crc kubenswrapper[4813]: I0129 17:28:54.108410 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-h8ndz_220240d2-4982-4884-80eb-09b077e332a1/cert-manager-controller/0.log" Jan 29 17:28:54 crc kubenswrapper[4813]: I0129 17:28:54.115662 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-h8ndz_220240d2-4982-4884-80eb-09b077e332a1/cert-manager-controller/1.log" Jan 29 17:28:54 crc kubenswrapper[4813]: I0129 17:28:54.351897 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-t84sv_e70ce70a-9390-42de-828e-054a8b3e1a4a/cert-manager-cainjector/0.log" Jan 29 17:28:54 crc kubenswrapper[4813]: I0129 17:28:54.487609 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-xcvwg_9f660d36-6472-483e-96b9-10742c0bdc49/cert-manager-webhook/0.log" Jan 29 17:28:56 crc kubenswrapper[4813]: E0129 17:28:56.241136 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:28:58 crc kubenswrapper[4813]: E0129 17:28:58.245563 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.240698 4813 patch_prober.go:28] interesting pod/machine-config-daemon-r269r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.241064 4813 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.248331 4813 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-r269r" Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.248929 4813 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf"} pod="openshift-machine-config-operator/machine-config-daemon-r269r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.248991 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" containerName="machine-config-daemon" containerID="cri-o://f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" gracePeriod=600 Jan 29 17:29:00 crc kubenswrapper[4813]: E0129 17:29:00.386058 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.652326 4813 generic.go:334] "Generic (PLEG): container finished" podID="71cdf350-59d3-4d6f-8995-173528429b59" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" exitCode=0 Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.652388 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-r269r" event={"ID":"71cdf350-59d3-4d6f-8995-173528429b59","Type":"ContainerDied","Data":"f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf"} Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.652422 4813 scope.go:117] "RemoveContainer" containerID="bbf1f35f8b3a79a725a75a727ff667f24d8f518dc16c54ac51858cc4cacb36d8" Jan 29 17:29:00 crc kubenswrapper[4813]: I0129 17:29:00.653178 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:29:00 crc kubenswrapper[4813]: E0129 17:29:00.653763 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:29:05 crc kubenswrapper[4813]: I0129 17:29:05.913827 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-fvjfk_eabd7875-0bba-4ea2-9fec-b87ddb267bd3/nmstate-console-plugin/0.log" Jan 29 17:29:06 crc 
kubenswrapper[4813]: I0129 17:29:06.098885 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lwkbj_008d4499-bdca-4247-9270-0f178a379a30/nmstate-handler/0.log" Jan 29 17:29:06 crc kubenswrapper[4813]: I0129 17:29:06.157379 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6rnpf_b90fa914-925f-456e-8466-53c5bb0c4464/kube-rbac-proxy/0.log" Jan 29 17:29:06 crc kubenswrapper[4813]: I0129 17:29:06.240340 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6rnpf_b90fa914-925f-456e-8466-53c5bb0c4464/nmstate-metrics/0.log" Jan 29 17:29:06 crc kubenswrapper[4813]: I0129 17:29:06.283128 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-wl6z6_065cce61-9285-4d2c-846f-8c683898edd2/nmstate-operator/0.log" Jan 29 17:29:06 crc kubenswrapper[4813]: I0129 17:29:06.412398 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-xqdsb_9b92c1e6-a66c-4d10-bf5d-fd8ccd18a7a5/nmstate-webhook/0.log" Jan 29 17:29:08 crc kubenswrapper[4813]: E0129 17:29:08.247503 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:29:13 crc kubenswrapper[4813]: E0129 17:29:13.242208 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:29:15 crc kubenswrapper[4813]: I0129 17:29:15.239288 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:29:15 crc kubenswrapper[4813]: E0129 17:29:15.239929 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:29:19 crc kubenswrapper[4813]: E0129 17:29:19.362759 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 17:29:19 crc kubenswrapper[4813]: E0129 17:29:19.363422 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5l7mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vcx7d_openshift-marketplace(483ea71b-33ae-40f2-9120-3b22c0f6ff21): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:29:19 crc kubenswrapper[4813]: E0129 17:29:19.364602 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:29:27 crc kubenswrapper[4813]: E0129 17:29:27.415599 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 17:29:27 crc kubenswrapper[4813]: E0129 17:29:27.416257 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv5nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4sdxc_openshift-marketplace(11f96ea5-984e-41c6-ad14-0ce689ec882f): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:29:27 crc kubenswrapper[4813]: E0129 17:29:27.417415 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:29:29 crc kubenswrapper[4813]: I0129 17:29:29.239403 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:29:29 crc kubenswrapper[4813]: E0129 17:29:29.239922 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:29:29 crc kubenswrapper[4813]: I0129 17:29:29.913151 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6nftd_98ca8c84-696a-4a20-a620-beebb81a4d9b/kube-rbac-proxy/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.090402 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-sswwk_e0a80701-75b8-47bd-a5ca-a17d911b3d05/frr-k8s-webhook-server/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.227801 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6nftd_98ca8c84-696a-4a20-a620-beebb81a4d9b/controller/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.282335 4813 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-frr-files/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.409218 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-reloader/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.416239 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-frr-files/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.462126 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-metrics/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.508095 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-reloader/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.648302 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-reloader/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.648537 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-frr-files/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.649967 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-metrics/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.705637 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-metrics/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.882264 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-frr-files/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.888448 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-reloader/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.923540 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/controller/0.log" Jan 29 17:29:30 crc kubenswrapper[4813]: I0129 17:29:30.956009 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/cp-metrics/0.log" Jan 29 17:29:31 crc kubenswrapper[4813]: I0129 17:29:31.117363 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/kube-rbac-proxy/0.log" Jan 29 17:29:31 crc kubenswrapper[4813]: I0129 17:29:31.135829 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/frr-metrics/0.log" Jan 29 17:29:31 crc kubenswrapper[4813]: I0129 17:29:31.145422 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/kube-rbac-proxy-frr/0.log" Jan 29 17:29:31 crc kubenswrapper[4813]: I0129 17:29:31.408710 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/reloader/0.log" Jan 29 17:29:31 crc 
kubenswrapper[4813]: I0129 17:29:31.421039 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68bd5b494f-lprbc_4f2eda5a-821a-4563-b99b-01b197b48993/manager/0.log" Jan 29 17:29:31 crc kubenswrapper[4813]: I0129 17:29:31.594004 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-769db779b-hnn5b_d1f14ce8-30ad-41fc-a8c6-99a0b06b35e6/webhook-server/0.log" Jan 29 17:29:31 crc kubenswrapper[4813]: I0129 17:29:31.710535 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nf7rq_2b3d4378-fe4d-4a46-8a43-c66518db31e0/kube-rbac-proxy/0.log" Jan 29 17:29:32 crc kubenswrapper[4813]: E0129 17:29:32.241427 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:29:32 crc kubenswrapper[4813]: I0129 17:29:32.267163 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nf7rq_2b3d4378-fe4d-4a46-8a43-c66518db31e0/speaker/0.log" Jan 29 17:29:32 crc kubenswrapper[4813]: I0129 17:29:32.461514 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xfmps_6df3ba9c-ad21-4805-bee6-eaa997d79d87/frr/0.log" Jan 29 17:29:40 crc kubenswrapper[4813]: I0129 17:29:40.239632 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:29:40 crc kubenswrapper[4813]: E0129 17:29:40.240146 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.210270 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs_0d047ac7-886a-419d-bbd1-42a1ee103641/util/0.log" Jan 29 17:29:43 crc kubenswrapper[4813]: E0129 17:29:43.243236 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.456698 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs_0d047ac7-886a-419d-bbd1-42a1ee103641/pull/0.log" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.484351 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs_0d047ac7-886a-419d-bbd1-42a1ee103641/util/0.log" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.506910 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs_0d047ac7-886a-419d-bbd1-42a1ee103641/pull/0.log" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.670613 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs_0d047ac7-886a-419d-bbd1-42a1ee103641/util/0.log" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.671041 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs_0d047ac7-886a-419d-bbd1-42a1ee103641/extract/0.log" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.676299 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcc9xfs_0d047ac7-886a-419d-bbd1-42a1ee103641/pull/0.log" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.828831 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd_aefb8fdb-ae07-4844-b7de-bf30c35e65d1/util/0.log" Jan 29 17:29:43 crc kubenswrapper[4813]: I0129 17:29:43.982070 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd_aefb8fdb-ae07-4844-b7de-bf30c35e65d1/util/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.179140 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd_aefb8fdb-ae07-4844-b7de-bf30c35e65d1/pull/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.179237 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd_aefb8fdb-ae07-4844-b7de-bf30c35e65d1/pull/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.348223 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd_aefb8fdb-ae07-4844-b7de-bf30c35e65d1/extract/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.357492 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd_aefb8fdb-ae07-4844-b7de-bf30c35e65d1/pull/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.358632 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71392xfd_aefb8fdb-ae07-4844-b7de-bf30c35e65d1/util/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.529970 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5_f435c00e-812d-4034-9652-7535d6f694cd/util/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.713755 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5_f435c00e-812d-4034-9652-7535d6f694cd/util/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.741751 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5_f435c00e-812d-4034-9652-7535d6f694cd/pull/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.771319 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5_f435c00e-812d-4034-9652-7535d6f694cd/pull/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.914571 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5_f435c00e-812d-4034-9652-7535d6f694cd/util/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.945535 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5_f435c00e-812d-4034-9652-7535d6f694cd/pull/0.log" Jan 29 17:29:44 crc kubenswrapper[4813]: I0129 17:29:44.954377 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vm4n5_f435c00e-812d-4034-9652-7535d6f694cd/extract/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.080981 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m6m45_4e573ebb-94a9-440d-bba1-58251a12dfb9/extract-utilities/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.228097 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m6m45_4e573ebb-94a9-440d-bba1-58251a12dfb9/extract-content/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.240210 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m6m45_4e573ebb-94a9-440d-bba1-58251a12dfb9/extract-utilities/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.287709 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m6m45_4e573ebb-94a9-440d-bba1-58251a12dfb9/extract-content/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.395687 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m6m45_4e573ebb-94a9-440d-bba1-58251a12dfb9/extract-utilities/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.443282 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m6m45_4e573ebb-94a9-440d-bba1-58251a12dfb9/extract-content/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.600485 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c24v5_3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1/extract-utilities/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.773856 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c24v5_3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1/extract-utilities/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.831370 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c24v5_3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1/extract-content/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.837850 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-c24v5_3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1/extract-content/0.log" Jan 29 17:29:45 crc kubenswrapper[4813]: I0129 17:29:45.942268 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-m6m45_4e573ebb-94a9-440d-bba1-58251a12dfb9/registry-server/0.log" Jan 29 17:29:46 crc kubenswrapper[4813]: I0129 17:29:46.025785 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c24v5_3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1/extract-content/0.log" Jan 29 17:29:46 crc kubenswrapper[4813]: I0129 17:29:46.029684 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c24v5_3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1/extract-utilities/0.log" Jan 29 17:29:46 crc kubenswrapper[4813]: I0129 17:29:46.225915 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vcx7d_483ea71b-33ae-40f2-9120-3b22c0f6ff21/extract-utilities/0.log" Jan 29 17:29:46 crc kubenswrapper[4813]: E0129 17:29:46.243729 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:29:46 crc kubenswrapper[4813]: I0129 17:29:46.579491 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vcx7d_483ea71b-33ae-40f2-9120-3b22c0f6ff21/extract-utilities/0.log" Jan 29 17:29:46 crc kubenswrapper[4813]: I0129 17:29:46.606276 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c24v5_3bd5bef2-b1f5-4fa4-bf92-ea5a91f007b1/registry-server/0.log" Jan 29 17:29:46 crc kubenswrapper[4813]: I0129 17:29:46.770233 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vcx7d_483ea71b-33ae-40f2-9120-3b22c0f6ff21/extract-utilities/0.log" Jan 29 17:29:46 crc kubenswrapper[4813]: I0129 17:29:46.900451 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4h4mv_5365bd03-836d-4231-9dde-3b2a3c201e2d/marketplace-operator/0.log" Jan 29 17:29:46 crc kubenswrapper[4813]: I0129 17:29:46.975666 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4sdxc_11f96ea5-984e-41c6-ad14-0ce689ec882f/extract-utilities/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.099653 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4sdxc_11f96ea5-984e-41c6-ad14-0ce689ec882f/extract-utilities/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.492215 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4sdxc_11f96ea5-984e-41c6-ad14-0ce689ec882f/extract-utilities/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.507275 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p9jg2_b560cafb-e64c-45b0-912d-1d086bfb8d20/extract-utilities/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.695991 4813 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-p9jg2_b560cafb-e64c-45b0-912d-1d086bfb8d20/extract-utilities/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.704534 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p9jg2_b560cafb-e64c-45b0-912d-1d086bfb8d20/extract-content/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.728549 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p9jg2_b560cafb-e64c-45b0-912d-1d086bfb8d20/extract-content/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.892171 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p9jg2_b560cafb-e64c-45b0-912d-1d086bfb8d20/extract-content/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.898911 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p9jg2_b560cafb-e64c-45b0-912d-1d086bfb8d20/extract-utilities/0.log" Jan 29 17:29:47 crc kubenswrapper[4813]: I0129 17:29:47.908193 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7hpkl_3d9a67ec-1fb1-4442-99b8-c7ee1b729e23/extract-utilities/0.log" Jan 29 17:29:48 crc kubenswrapper[4813]: I0129 17:29:48.034101 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p9jg2_b560cafb-e64c-45b0-912d-1d086bfb8d20/registry-server/0.log" Jan 29 17:29:48 crc kubenswrapper[4813]: I0129 17:29:48.137982 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7hpkl_3d9a67ec-1fb1-4442-99b8-c7ee1b729e23/extract-content/0.log" Jan 29 17:29:48 crc kubenswrapper[4813]: I0129 17:29:48.142785 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7hpkl_3d9a67ec-1fb1-4442-99b8-c7ee1b729e23/extract-content/0.log" Jan 29 17:29:48 crc kubenswrapper[4813]: I0129 17:29:48.145147 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7hpkl_3d9a67ec-1fb1-4442-99b8-c7ee1b729e23/extract-utilities/0.log" Jan 29 17:29:48 crc kubenswrapper[4813]: I0129 17:29:48.342278 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7hpkl_3d9a67ec-1fb1-4442-99b8-c7ee1b729e23/extract-utilities/0.log" Jan 29 17:29:48 crc kubenswrapper[4813]: I0129 17:29:48.345195 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7hpkl_3d9a67ec-1fb1-4442-99b8-c7ee1b729e23/extract-content/0.log" Jan 29 17:29:48 crc kubenswrapper[4813]: I0129 17:29:48.744166 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7hpkl_3d9a67ec-1fb1-4442-99b8-c7ee1b729e23/registry-server/0.log" Jan 29 17:29:51 crc kubenswrapper[4813]: I0129 17:29:51.239879 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:29:51 crc kubenswrapper[4813]: E0129 17:29:51.240455 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.074184 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6f6ng"] Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.076200 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.084654 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6f6ng"] Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.207195 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-utilities\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.207284 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-catalog-content\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.207348 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b87k7\" (UniqueName: \"kubernetes.io/projected/9ee66b96-b49f-490d-94ca-d0c9d5736dad-kube-api-access-b87k7\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.308821 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-utilities\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.309140 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-catalog-content\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.309258 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b87k7\" (UniqueName: \"kubernetes.io/projected/9ee66b96-b49f-490d-94ca-d0c9d5736dad-kube-api-access-b87k7\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.309459 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-utilities\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.309724 4813 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-catalog-content\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.332270 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b87k7\" (UniqueName: \"kubernetes.io/projected/9ee66b96-b49f-490d-94ca-d0c9d5736dad-kube-api-access-b87k7\") pod \"certified-operators-6f6ng\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") " pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.396909 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.871370 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6f6ng"] Jan 29 17:29:54 crc kubenswrapper[4813]: I0129 17:29:54.995980 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f6ng" event={"ID":"9ee66b96-b49f-490d-94ca-d0c9d5736dad","Type":"ContainerStarted","Data":"d03b5bc0d2bb2051dcd032c73933bd214ccdeb4cab1f710a7b36b577d0a21cff"} Jan 29 17:29:56 crc kubenswrapper[4813]: I0129 17:29:56.004150 4813 generic.go:334] "Generic (PLEG): container finished" podID="9ee66b96-b49f-490d-94ca-d0c9d5736dad" containerID="7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722" exitCode=0 Jan 29 17:29:56 crc kubenswrapper[4813]: I0129 17:29:56.004202 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f6ng" event={"ID":"9ee66b96-b49f-490d-94ca-d0c9d5736dad","Type":"ContainerDied","Data":"7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722"} Jan 29 17:29:56 crc kubenswrapper[4813]: E0129 17:29:56.132251 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:29:56 crc kubenswrapper[4813]: E0129 17:29:56.132402 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b87k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6f6ng_openshift-marketplace(9ee66b96-b49f-490d-94ca-d0c9d5736dad): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:29:56 crc kubenswrapper[4813]: E0129 17:29:56.133571 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad"
Jan 29 17:29:56 crc kubenswrapper[4813]: E0129 17:29:56.241757 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:29:57 crc kubenswrapper[4813]: E0129 17:29:57.010972 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad"
Jan 29 17:29:59 crc kubenswrapper[4813]: E0129 17:29:59.241574 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.157924 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"]
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.159356 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.161916 4813 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.162507 4813 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.170316 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"]
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.286622 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f006aaa-9091-46ef-88cf-f1c70e2620a0-secret-volume\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.286733 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8sq\" (UniqueName: \"kubernetes.io/projected/3f006aaa-9091-46ef-88cf-f1c70e2620a0-kube-api-access-kr8sq\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.286794 4813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f006aaa-9091-46ef-88cf-f1c70e2620a0-config-volume\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.388508 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f006aaa-9091-46ef-88cf-f1c70e2620a0-config-volume\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.389081 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f006aaa-9091-46ef-88cf-f1c70e2620a0-secret-volume\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.389320 4813 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr8sq\" (UniqueName: \"kubernetes.io/projected/3f006aaa-9091-46ef-88cf-f1c70e2620a0-kube-api-access-kr8sq\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"
Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.389557 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f006aaa-9091-46ef-88cf-f1c70e2620a0-config-volume\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"
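The 403 above fails before any image data moves: a pull from registry.redhat.io first has to obtain a bearer token, and it is the token endpoint that is rejecting the node's credentials (typically a missing or stale pull secret), which CRI-O then surfaces to the kubelet as ErrImagePull. A minimal sketch of that Docker Registry v2 token handshake; the realm URL and credentials below are placeholders, real clients parse them out of the WWW-Authenticate challenge:

// token_sketch.go - sketch of the registry v2 bearer-token handshake that
// produces "Requesting bearer token: invalid status code ... 403" above.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Step 1: anonymous ping. The registry answers 401 with a header like
	// WWW-Authenticate: Bearer realm="https://...",service="..."
	ping, err := http.Get("https://registry.redhat.io/v2/")
	if err != nil {
		panic(err)
	}
	ping.Body.Close()
	fmt.Println("ping:", ping.Status, "challenge:", ping.Header.Get("Www-Authenticate"))

	// Step 2: ask the realm for a pull-scoped token. With missing or rejected
	// credentials the token endpoint itself returns 403, which is exactly the
	// status quoted in the kubelet log.
	realm := "https://example.invalid/token" // placeholder: take the realm from the challenge
	req, _ := http.NewRequest("GET", realm+"?service=registry&scope=repository:redhat/certified-operator-index:pull", nil)
	req.SetBasicAuth("user", "secret") // placeholder pull-secret credentials
	tok, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	tok.Body.Close()
	fmt.Println("token endpoint:", tok.Status)
}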
\"kubernetes.io/configmap/3f006aaa-9091-46ef-88cf-f1c70e2620a0-config-volume\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.403809 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f006aaa-9091-46ef-88cf-f1c70e2620a0-secret-volume\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.410684 4813 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8sq\" (UniqueName: \"kubernetes.io/projected/3f006aaa-9091-46ef-88cf-f1c70e2620a0-kube-api-access-kr8sq\") pod \"collect-profiles-29495130-h7t8m\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.489460 4813 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.906681 4813 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"] Jan 29 17:30:01 crc kubenswrapper[4813]: I0129 17:30:01.033337 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" event={"ID":"3f006aaa-9091-46ef-88cf-f1c70e2620a0","Type":"ContainerStarted","Data":"39909afe738bdcccddea8f6ea3251bd8818b1c9062854ad4946779af539a7d21"} Jan 29 17:30:02 crc kubenswrapper[4813]: I0129 17:30:02.041199 4813 generic.go:334] "Generic (PLEG): container finished" podID="3f006aaa-9091-46ef-88cf-f1c70e2620a0" containerID="ece532e8af5403669af5acecf85ef6a05c3ee003df563c37d1106f1f80dabb93" exitCode=0 Jan 29 17:30:02 crc kubenswrapper[4813]: I0129 17:30:02.041238 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" event={"ID":"3f006aaa-9091-46ef-88cf-f1c70e2620a0","Type":"ContainerDied","Data":"ece532e8af5403669af5acecf85ef6a05c3ee003df563c37d1106f1f80dabb93"} Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.239942 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:30:03 crc kubenswrapper[4813]: E0129 17:30:03.240722 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.319851 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.435077 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f006aaa-9091-46ef-88cf-f1c70e2620a0-secret-volume\") pod \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.436132 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f006aaa-9091-46ef-88cf-f1c70e2620a0-config-volume\") pod \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.436493 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr8sq\" (UniqueName: \"kubernetes.io/projected/3f006aaa-9091-46ef-88cf-f1c70e2620a0-kube-api-access-kr8sq\") pod \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\" (UID: \"3f006aaa-9091-46ef-88cf-f1c70e2620a0\") " Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.436789 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f006aaa-9091-46ef-88cf-f1c70e2620a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f006aaa-9091-46ef-88cf-f1c70e2620a0" (UID: "3f006aaa-9091-46ef-88cf-f1c70e2620a0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.437373 4813 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f006aaa-9091-46ef-88cf-f1c70e2620a0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.441268 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f006aaa-9091-46ef-88cf-f1c70e2620a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f006aaa-9091-46ef-88cf-f1c70e2620a0" (UID: "3f006aaa-9091-46ef-88cf-f1c70e2620a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.441474 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f006aaa-9091-46ef-88cf-f1c70e2620a0-kube-api-access-kr8sq" (OuterVolumeSpecName: "kube-api-access-kr8sq") pod "3f006aaa-9091-46ef-88cf-f1c70e2620a0" (UID: "3f006aaa-9091-46ef-88cf-f1c70e2620a0"). InnerVolumeSpecName "kube-api-access-kr8sq". 
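The reconciler_common.go lines trace the kubelet's volume manager: for each pod it compares a desired world (volumes the pod spec wants mounted) against an actual world (what is mounted), issues MountVolume/UnmountVolume operations for the difference, and logs "Volume detached" once teardown completes. A toy version of that loop; names are illustrative, the real manager drives per-volume state machines through an operation executor:

// reconciler_sketch.go - a minimal desired-state/actual-state reconcile loop.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	for vol := range desired {
		if !actual[vol] {
			fmt.Printf("MountVolume started for volume %q\n", vol)
			actual[vol] = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
		}
	}
	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("UnmountVolume started for volume %q\n", vol)
			delete(actual, vol)
			fmt.Printf("Volume detached for volume %q\n", vol)
		}
	}
}

func main() {
	actual := map[string]bool{}
	// Pod scheduled: three volumes appear in the desired state.
	reconcile(map[string]bool{"config-volume": true, "secret-volume": true, "kube-api-access-kr8sq": true}, actual)
	// Pod deleted: the desired state empties and everything is torn down.
	reconcile(map[string]bool{}, actual)
}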
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.539418 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr8sq\" (UniqueName: \"kubernetes.io/projected/3f006aaa-9091-46ef-88cf-f1c70e2620a0-kube-api-access-kr8sq\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:03 crc kubenswrapper[4813]: I0129 17:30:03.539461 4813 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f006aaa-9091-46ef-88cf-f1c70e2620a0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 17:30:04 crc kubenswrapper[4813]: I0129 17:30:04.057891 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" event={"ID":"3f006aaa-9091-46ef-88cf-f1c70e2620a0","Type":"ContainerDied","Data":"39909afe738bdcccddea8f6ea3251bd8818b1c9062854ad4946779af539a7d21"} Jan 29 17:30:04 crc kubenswrapper[4813]: I0129 17:30:04.057933 4813 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39909afe738bdcccddea8f6ea3251bd8818b1c9062854ad4946779af539a7d21" Jan 29 17:30:04 crc kubenswrapper[4813]: I0129 17:30:04.058033 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m" Jan 29 17:30:04 crc kubenswrapper[4813]: I0129 17:30:04.393259 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk"] Jan 29 17:30:04 crc kubenswrapper[4813]: I0129 17:30:04.398087 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495085-4vplk"] Jan 29 17:30:06 crc kubenswrapper[4813]: I0129 17:30:06.254668 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b" path="/var/lib/kubelet/pods/05bbf5f6-761e-4e1d-8495-5d1f3ab1dd0b/volumes" Jan 29 17:30:08 crc kubenswrapper[4813]: E0129 17:30:08.391183 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 17:30:08 crc kubenswrapper[4813]: E0129 17:30:08.391631 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b87k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6f6ng_openshift-marketplace(9ee66b96-b49f-490d-94ca-d0c9d5736dad): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 17:30:08 crc kubenswrapper[4813]: E0129 17:30:08.392825 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad" Jan 29 17:30:11 crc kubenswrapper[4813]: E0129 17:30:11.242922 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:30:14 crc kubenswrapper[4813]: E0129 17:30:14.242988 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:30:17 crc kubenswrapper[4813]: I0129 17:30:17.239792 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:30:17 crc kubenswrapper[4813]: E0129 17:30:17.240224 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:30:21 
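ImagePullBackOff is not a new failure; it means the sync loop is refusing to retry the pull yet. The real pull attempts above land at 17:29:56, 17:30:08 and 17:30:33, with the gaps roughly doubling, and the machine-config-daemon shows the same policy at its cap ("back-off 5m0s"). A standalone sketch of that doubling delay; the 10s initial value and 5m cap match the kubelet's defaults, but treat the exact constants as assumptions:

// backoff_sketch.go - the exponential back-off behind ImagePullBackOff and
// CrashLoopBackOff messages in this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, limit := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
		delay *= 2
		if delay > limit {
			delay = limit // after a few failures every retry waits the full 5m0s
		}
	}
}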
Jan 29 17:30:21 crc kubenswrapper[4813]: E0129 17:30:21.242454 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad"
Jan 29 17:30:26 crc kubenswrapper[4813]: E0129 17:30:26.245686 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:30:28 crc kubenswrapper[4813]: E0129 17:30:28.241275 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:30:28 crc kubenswrapper[4813]: I0129 17:30:28.496677 4813 scope.go:117] "RemoveContainer" containerID="1cc8dfac029bdaa7a481fadeaec6a3e80001eac67220e8dea219c8a24868b040"
Jan 29 17:30:30 crc kubenswrapper[4813]: I0129 17:30:30.241158 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf"
Jan 29 17:30:30 crc kubenswrapper[4813]: E0129 17:30:30.241876 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:30:33 crc kubenswrapper[4813]: E0129 17:30:33.375961 4813 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 29 17:30:33 crc kubenswrapper[4813]: E0129 17:30:33.376670 4813 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b87k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6f6ng_openshift-marketplace(9ee66b96-b49f-490d-94ca-d0c9d5736dad): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError"
Jan 29 17:30:33 crc kubenswrapper[4813]: E0129 17:30:33.378740 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad"
Jan 29 17:30:39 crc kubenswrapper[4813]: E0129 17:30:39.241792 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:30:41 crc kubenswrapper[4813]: I0129 17:30:41.239626 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf"
Jan 29 17:30:41 crc kubenswrapper[4813]: E0129 17:30:41.239854 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:30:42 crc kubenswrapper[4813]: E0129 17:30:42.243638 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
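The huge "Unhandled Error" entries are the kubelet printing the failed init container's full spec as a Go struct: extract-content runs /utilities/copy-content to copy the catalog (--catalog.from=/configs) and its cache into the catalog-content emptyDir shared with the registry-server container, so nothing in the pod can start while the pull keeps failing. Apart from those dumps, every entry follows the same journald-plus-klog shape (severity letter, MMDD, timestamp, thread id, file:line, structured message), which makes the file easy to slice mechanically; a small parser sketch:

// klogparse.go - split one kubelet entry into its klog fields.
package main

import (
	"fmt"
	"regexp"
)

var entry = regexp.MustCompile(
	`^(\w{3} \d+ [\d:]+) (\S+) kubenswrapper\[(\d+)\]: ([IEW])(\d{4}) ([\d:.]+)\s+(\d+) ([\w./]+:\d+)\] (.*)$`)

func main() {
	line := `Jan 29 17:30:00 crc kubenswrapper[4813]: I0129 17:30:00.157924 4813 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495130-h7t8m"]`
	m := entry.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("severity:", m[4]) // I = info, E = error
	fmt.Println("source:  ", m[8]) // kubelet.go:2421
	fmt.Println("message: ", m[9])
}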
Jan 29 17:30:44 crc kubenswrapper[4813]: E0129 17:30:44.242645 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad"
Jan 29 17:30:54 crc kubenswrapper[4813]: E0129 17:30:54.242805 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:30:55 crc kubenswrapper[4813]: I0129 17:30:55.239988 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf"
Jan 29 17:30:55 crc kubenswrapper[4813]: E0129 17:30:55.240878 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:30:55 crc kubenswrapper[4813]: E0129 17:30:55.241883 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad"
Jan 29 17:30:57 crc kubenswrapper[4813]: E0129 17:30:57.241277 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:30:58 crc kubenswrapper[4813]: I0129 17:30:58.463994 4813 generic.go:334] "Generic (PLEG): container finished" podID="5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb" containerID="caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac" exitCode=0
Jan 29 17:30:58 crc kubenswrapper[4813]: I0129 17:30:58.464133 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lfj4c/must-gather-c8855" event={"ID":"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb","Type":"ContainerDied","Data":"caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac"}
Jan 29 17:30:58 crc kubenswrapper[4813]: I0129 17:30:58.464822 4813 scope.go:117] "RemoveContainer" containerID="caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac"
Jan 29 17:30:59 crc kubenswrapper[4813]: I0129 17:30:59.472306 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lfj4c_must-gather-c8855_5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb/gather/0.log"
Jan 29 17:31:06 crc kubenswrapper[4813]: I0129 17:31:06.240735 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf"
Jan 29 17:31:06 crc kubenswrapper[4813]: E0129 17:31:06.241537 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:31:06 crc kubenswrapper[4813]: E0129 17:31:06.244411 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:31:07 crc kubenswrapper[4813]: I0129 17:31:07.431088 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lfj4c/must-gather-c8855"]
Jan 29 17:31:07 crc kubenswrapper[4813]: I0129 17:31:07.431859 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lfj4c/must-gather-c8855" podUID="5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb" containerName="copy" containerID="cri-o://be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27" gracePeriod=2
Jan 29 17:31:07 crc kubenswrapper[4813]: I0129 17:31:07.438716 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lfj4c/must-gather-c8855"]
Jan 29 17:31:07 crc kubenswrapper[4813]: I0129 17:31:07.841443 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lfj4c_must-gather-c8855_5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb/copy/0.log"
Jan 29 17:31:07 crc kubenswrapper[4813]: I0129 17:31:07.842361 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lfj4c/must-gather-c8855"
Jan 29 17:31:07 crc kubenswrapper[4813]: I0129 17:31:07.926068 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-must-gather-output\") pod \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\" (UID: \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\") "
Jan 29 17:31:07 crc kubenswrapper[4813]: I0129 17:31:07.926272 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwrp6\" (UniqueName: \"kubernetes.io/projected/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-kube-api-access-nwrp6\") pod \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\" (UID: \"5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb\") "
Jan 29 17:31:07 crc kubenswrapper[4813]: I0129 17:31:07.934160 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-kube-api-access-nwrp6" (OuterVolumeSpecName: "kube-api-access-nwrp6") pod "5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb" (UID: "5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb"). InnerVolumeSpecName "kube-api-access-nwrp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.021034 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb" (UID: "5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.027978 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwrp6\" (UniqueName: \"kubernetes.io/projected/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-kube-api-access-nwrp6\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.028017 4813 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:08 crc kubenswrapper[4813]: E0129 17:31:08.245656 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.249524 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb" path="/var/lib/kubelet/pods/5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb/volumes" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.528228 4813 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lfj4c_must-gather-c8855_5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb/copy/0.log" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.528710 4813 generic.go:334] "Generic (PLEG): container finished" podID="5a3f1f86-ade2-4da3-9cd7-ea02dc2e58bb" containerID="be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27" exitCode=143 Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.528763 4813 scope.go:117] "RemoveContainer" containerID="be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.528774 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lfj4c/must-gather-c8855" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.550780 4813 scope.go:117] "RemoveContainer" containerID="caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.608409 4813 scope.go:117] "RemoveContainer" containerID="be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27" Jan 29 17:31:08 crc kubenswrapper[4813]: E0129 17:31:08.608974 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27\": container with ID starting with be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27 not found: ID does not exist" containerID="be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.609041 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27"} err="failed to get container status \"be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27\": rpc error: code = NotFound desc = could not find container \"be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27\": container with ID starting with be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27 not found: ID does not exist" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.609330 4813 scope.go:117] "RemoveContainer" containerID="caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac" Jan 29 17:31:08 crc kubenswrapper[4813]: E0129 17:31:08.609917 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac\": container with ID starting with caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac not found: ID does not exist" containerID="caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac" Jan 29 17:31:08 crc kubenswrapper[4813]: I0129 17:31:08.609973 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac"} err="failed to get container status \"caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac\": rpc error: code = NotFound desc = could not find container \"caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac\": container with ID starting with caca093e1fe55fa722784ef55905f24d5816731b808ed8734cf78cc82511c6ac not found: ID does not exist" Jan 29 17:31:12 crc kubenswrapper[4813]: E0129 17:31:12.241466 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:31:17 crc kubenswrapper[4813]: I0129 17:31:17.239614 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:31:17 crc kubenswrapper[4813]: E0129 17:31:17.240168 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
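The NotFound errors right after "RemoveContainer" are benign: the kubelet asks the runtime for the status of a container that CRI-O has already deleted, and since absence is the desired end state, it logs the error once and moves on instead of retrying. The idempotent-delete pattern, sketched with a stand-in error value:

// remove_sketch.go - treat NotFound as success when deleting a container.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist") // stand-in for the gRPC NotFound status

func containerStatus(id string) error { return errNotFound } // runtime no longer knows the ID

func removeContainer(id string) {
	if err := containerStatus(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %.12s already removed, nothing to do\n", id)
			return // idempotent: absence is the desired end state
		}
		fmt.Println("transient error, retry later:", err)
		return
	}
	fmt.Println("removing", id)
}

func main() {
	removeContainer("be425e9b3a60a68fbca1130d3a2d7bbc077a29c4f9ef1c4209e9f72a15efeb27")
}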
Jan 29 17:31:18 crc kubenswrapper[4813]: E0129 17:31:18.250869 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:31:23 crc kubenswrapper[4813]: I0129 17:31:23.646710 4813 generic.go:334] "Generic (PLEG): container finished" podID="9ee66b96-b49f-490d-94ca-d0c9d5736dad" containerID="05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e" exitCode=0
Jan 29 17:31:23 crc kubenswrapper[4813]: I0129 17:31:23.646793 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f6ng" event={"ID":"9ee66b96-b49f-490d-94ca-d0c9d5736dad","Type":"ContainerDied","Data":"05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e"}
Jan 29 17:31:24 crc kubenswrapper[4813]: E0129 17:31:24.241381 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21"
Jan 29 17:31:24 crc kubenswrapper[4813]: I0129 17:31:24.703298 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f6ng" event={"ID":"9ee66b96-b49f-490d-94ca-d0c9d5736dad","Type":"ContainerStarted","Data":"ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a"}
Jan 29 17:31:24 crc kubenswrapper[4813]: I0129 17:31:24.730255 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6f6ng" podStartSLOduration=2.598029265 podStartE2EDuration="1m30.730236399s" podCreationTimestamp="2026-01-29 17:29:54 +0000 UTC" firstStartedPulling="2026-01-29 17:29:56.005628283 +0000 UTC m=+3648.492831499" lastFinishedPulling="2026-01-29 17:31:24.137835417 +0000 UTC m=+3736.625038633" observedRunningTime="2026-01-29 17:31:24.724484115 +0000 UTC m=+3737.211687341" watchObservedRunningTime="2026-01-29 17:31:24.730236399 +0000 UTC m=+3737.217439615"
Jan 29 17:31:29 crc kubenswrapper[4813]: I0129 17:31:29.239568 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf"
Jan 29 17:31:29 crc kubenswrapper[4813]: E0129 17:31:29.241033 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59"
Jan 29 17:31:30 crc kubenswrapper[4813]: E0129 17:31:30.254762 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f"
Jan 29 17:31:34 crc kubenswrapper[4813]: I0129 17:31:34.397643 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6f6ng"
Jan 29 17:31:34 crc kubenswrapper[4813]: I0129 17:31:34.398950 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6f6ng"
Jan 29 17:31:34 crc kubenswrapper[4813]: I0129 17:31:34.458358 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6f6ng"
Jan 29 17:31:34 crc kubenswrapper[4813]: I0129 17:31:34.809786 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6f6ng"
Jan 29 17:31:34 crc kubenswrapper[4813]: I0129 17:31:34.854549 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6f6ng"]
Jan 29 17:31:36 crc kubenswrapper[4813]: I0129 17:31:36.780567 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6f6ng" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad" containerName="registry-server" containerID="cri-o://ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a" gracePeriod=2
Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.240597 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f6ng"
Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.381052 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-catalog-content\") pod \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") "
Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.381228 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-utilities\") pod \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") "
Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.381291 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b87k7\" (UniqueName: \"kubernetes.io/projected/9ee66b96-b49f-490d-94ca-d0c9d5736dad-kube-api-access-b87k7\") pod \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\" (UID: \"9ee66b96-b49f-490d-94ca-d0c9d5736dad\") "
Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.382256 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-utilities" (OuterVolumeSpecName: "utilities") pod "9ee66b96-b49f-490d-94ca-d0c9d5736dad" (UID: "9ee66b96-b49f-490d-94ca-d0c9d5736dad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.388713 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee66b96-b49f-490d-94ca-d0c9d5736dad-kube-api-access-b87k7" (OuterVolumeSpecName: "kube-api-access-b87k7") pod "9ee66b96-b49f-490d-94ca-d0c9d5736dad" (UID: "9ee66b96-b49f-490d-94ca-d0c9d5736dad"). InnerVolumeSpecName "kube-api-access-b87k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.424873 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ee66b96-b49f-490d-94ca-d0c9d5736dad" (UID: "9ee66b96-b49f-490d-94ca-d0c9d5736dad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.482591 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.482620 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ee66b96-b49f-490d-94ca-d0c9d5736dad-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.482631 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b87k7\" (UniqueName: \"kubernetes.io/projected/9ee66b96-b49f-490d-94ca-d0c9d5736dad-kube-api-access-b87k7\") on node \"crc\" DevicePath \"\"" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.789584 4813 generic.go:334] "Generic (PLEG): container finished" podID="9ee66b96-b49f-490d-94ca-d0c9d5736dad" containerID="ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a" exitCode=0 Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.789629 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f6ng" event={"ID":"9ee66b96-b49f-490d-94ca-d0c9d5736dad","Type":"ContainerDied","Data":"ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a"} Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.789658 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f6ng" event={"ID":"9ee66b96-b49f-490d-94ca-d0c9d5736dad","Type":"ContainerDied","Data":"d03b5bc0d2bb2051dcd032c73933bd214ccdeb4cab1f710a7b36b577d0a21cff"} Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.789673 4813 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6f6ng" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.789682 4813 scope.go:117] "RemoveContainer" containerID="ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.812491 4813 scope.go:117] "RemoveContainer" containerID="05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.836320 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6f6ng"] Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.843526 4813 scope.go:117] "RemoveContainer" containerID="7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.846658 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6f6ng"] Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.859430 4813 scope.go:117] "RemoveContainer" containerID="ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a" Jan 29 17:31:37 crc kubenswrapper[4813]: E0129 17:31:37.859930 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a\": container with ID starting with ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a not found: ID does not exist" containerID="ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.860011 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a"} err="failed to get container status \"ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a\": rpc error: code = NotFound desc = could not find container \"ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a\": container with ID starting with ddd56daaa85e9a6e4c4286faf777423a79b3749c75bcc795f7912f22ec18fb1a not found: ID does not exist" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.860084 4813 scope.go:117] "RemoveContainer" containerID="05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e" Jan 29 17:31:37 crc kubenswrapper[4813]: E0129 17:31:37.860391 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e\": container with ID starting with 05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e not found: ID does not exist" containerID="05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.860411 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e"} err="failed to get container status \"05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e\": rpc error: code = NotFound desc = could not find container \"05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e\": container with ID starting with 05013ec87af43f9fad6952b0783319183eb57eacd0fa785fb5a4cc1c93beec4e not found: ID does not exist" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.860424 4813 scope.go:117] "RemoveContainer" 
containerID="7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722" Jan 29 17:31:37 crc kubenswrapper[4813]: E0129 17:31:37.860726 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722\": container with ID starting with 7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722 not found: ID does not exist" containerID="7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722" Jan 29 17:31:37 crc kubenswrapper[4813]: I0129 17:31:37.860799 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722"} err="failed to get container status \"7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722\": rpc error: code = NotFound desc = could not find container \"7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722\": container with ID starting with 7817e5e97caeb9b07f61c2a56b4eb0137653d66d4430be80ac72b5cb6a4d9722 not found: ID does not exist" Jan 29 17:31:38 crc kubenswrapper[4813]: I0129 17:31:38.247161 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ee66b96-b49f-490d-94ca-d0c9d5736dad" path="/var/lib/kubelet/pods/9ee66b96-b49f-490d-94ca-d0c9d5736dad/volumes" Jan 29 17:31:39 crc kubenswrapper[4813]: E0129 17:31:39.241870 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:31:42 crc kubenswrapper[4813]: I0129 17:31:42.239923 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:31:42 crc kubenswrapper[4813]: E0129 17:31:42.240260 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:31:43 crc kubenswrapper[4813]: E0129 17:31:43.241384 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:31:53 crc kubenswrapper[4813]: E0129 17:31:53.241384 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" Jan 29 17:31:57 crc kubenswrapper[4813]: I0129 17:31:57.240030 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:31:57 crc kubenswrapper[4813]: E0129 17:31:57.240800 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:31:58 crc kubenswrapper[4813]: E0129 17:31:58.245700 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" Jan 29 17:32:07 crc kubenswrapper[4813]: I0129 17:32:07.241819 4813 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 17:32:08 crc kubenswrapper[4813]: I0129 17:32:08.246383 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:32:08 crc kubenswrapper[4813]: E0129 17:32:08.247171 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:32:09 crc kubenswrapper[4813]: I0129 17:32:09.025846 4813 generic.go:334] "Generic (PLEG): container finished" podID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" containerID="3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa" exitCode=0 Jan 29 17:32:09 crc kubenswrapper[4813]: I0129 17:32:09.025906 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vcx7d" event={"ID":"483ea71b-33ae-40f2-9120-3b22c0f6ff21","Type":"ContainerDied","Data":"3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa"} Jan 29 17:32:10 crc kubenswrapper[4813]: I0129 17:32:10.034828 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vcx7d" event={"ID":"483ea71b-33ae-40f2-9120-3b22c0f6ff21","Type":"ContainerStarted","Data":"437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf"} Jan 29 17:32:10 crc kubenswrapper[4813]: I0129 17:32:10.063302 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vcx7d" podStartSLOduration=2.079020001 podStartE2EDuration="5m42.063280102s" podCreationTimestamp="2026-01-29 17:26:28 +0000 UTC" firstStartedPulling="2026-01-29 17:26:29.64879314 +0000 UTC m=+3442.135996356" lastFinishedPulling="2026-01-29 17:32:09.633053231 +0000 UTC m=+3782.120256457" observedRunningTime="2026-01-29 17:32:10.057403245 +0000 UTC m=+3782.544606471" watchObservedRunningTime="2026-01-29 17:32:10.063280102 +0000 UTC m=+3782.550483338" Jan 29 17:32:12 crc kubenswrapper[4813]: I0129 17:32:12.059033 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdxc" event={"ID":"11f96ea5-984e-41c6-ad14-0ce689ec882f","Type":"ContainerStarted","Data":"d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201"} Jan 29 17:32:13 crc kubenswrapper[4813]: I0129 17:32:13.067458 4813 generic.go:334] "Generic (PLEG): 
container finished" podID="11f96ea5-984e-41c6-ad14-0ce689ec882f" containerID="d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201" exitCode=0 Jan 29 17:32:13 crc kubenswrapper[4813]: I0129 17:32:13.067505 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdxc" event={"ID":"11f96ea5-984e-41c6-ad14-0ce689ec882f","Type":"ContainerDied","Data":"d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201"} Jan 29 17:32:14 crc kubenswrapper[4813]: I0129 17:32:14.076331 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdxc" event={"ID":"11f96ea5-984e-41c6-ad14-0ce689ec882f","Type":"ContainerStarted","Data":"a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575"} Jan 29 17:32:14 crc kubenswrapper[4813]: I0129 17:32:14.102155 4813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4sdxc" podStartSLOduration=2.318511967 podStartE2EDuration="5m45.102115032s" podCreationTimestamp="2026-01-29 17:26:29 +0000 UTC" firstStartedPulling="2026-01-29 17:26:30.659106171 +0000 UTC m=+3443.146309407" lastFinishedPulling="2026-01-29 17:32:13.442709256 +0000 UTC m=+3785.929912472" observedRunningTime="2026-01-29 17:32:14.094665631 +0000 UTC m=+3786.581868867" watchObservedRunningTime="2026-01-29 17:32:14.102115032 +0000 UTC m=+3786.589318258" Jan 29 17:32:18 crc kubenswrapper[4813]: I0129 17:32:18.880796 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:32:18 crc kubenswrapper[4813]: I0129 17:32:18.881230 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:32:18 crc kubenswrapper[4813]: I0129 17:32:18.941017 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:32:19 crc kubenswrapper[4813]: I0129 17:32:19.152880 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:32:19 crc kubenswrapper[4813]: I0129 17:32:19.201163 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vcx7d"] Jan 29 17:32:19 crc kubenswrapper[4813]: I0129 17:32:19.926951 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:32:19 crc kubenswrapper[4813]: I0129 17:32:19.926999 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:32:19 crc kubenswrapper[4813]: I0129 17:32:19.967939 4813 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:32:20 crc kubenswrapper[4813]: I0129 17:32:20.160890 4813 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:32:20 crc kubenswrapper[4813]: I0129 17:32:20.240219 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:32:20 crc kubenswrapper[4813]: E0129 17:32:20.240454 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.130868 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vcx7d" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" containerName="registry-server" containerID="cri-o://437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf" gracePeriod=2 Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.518549 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.579987 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdxc"] Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.686817 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-utilities\") pod \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.686877 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l7mr\" (UniqueName: \"kubernetes.io/projected/483ea71b-33ae-40f2-9120-3b22c0f6ff21-kube-api-access-5l7mr\") pod \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.686911 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-catalog-content\") pod \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\" (UID: \"483ea71b-33ae-40f2-9120-3b22c0f6ff21\") " Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.688523 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-utilities" (OuterVolumeSpecName: "utilities") pod "483ea71b-33ae-40f2-9120-3b22c0f6ff21" (UID: "483ea71b-33ae-40f2-9120-3b22c0f6ff21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.693062 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483ea71b-33ae-40f2-9120-3b22c0f6ff21-kube-api-access-5l7mr" (OuterVolumeSpecName: "kube-api-access-5l7mr") pod "483ea71b-33ae-40f2-9120-3b22c0f6ff21" (UID: "483ea71b-33ae-40f2-9120-3b22c0f6ff21"). InnerVolumeSpecName "kube-api-access-5l7mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.733441 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "483ea71b-33ae-40f2-9120-3b22c0f6ff21" (UID: "483ea71b-33ae-40f2-9120-3b22c0f6ff21"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.789189 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l7mr\" (UniqueName: \"kubernetes.io/projected/483ea71b-33ae-40f2-9120-3b22c0f6ff21-kube-api-access-5l7mr\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.789245 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:21 crc kubenswrapper[4813]: I0129 17:32:21.789263 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ea71b-33ae-40f2-9120-3b22c0f6ff21-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.139407 4813 generic.go:334] "Generic (PLEG): container finished" podID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" containerID="437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf" exitCode=0 Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.139469 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vcx7d" event={"ID":"483ea71b-33ae-40f2-9120-3b22c0f6ff21","Type":"ContainerDied","Data":"437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf"} Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.139538 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vcx7d" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.139547 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vcx7d" event={"ID":"483ea71b-33ae-40f2-9120-3b22c0f6ff21","Type":"ContainerDied","Data":"cd7611a86e0ecf3a4b25edd2f9759a6c5f58dd33c5d7b964f629d0324fa33f43"} Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.139578 4813 scope.go:117] "RemoveContainer" containerID="437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.139656 4813 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4sdxc" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" containerName="registry-server" containerID="cri-o://a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575" gracePeriod=2 Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.164100 4813 scope.go:117] "RemoveContainer" containerID="3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.176940 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vcx7d"] Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.181081 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vcx7d"] Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.239041 4813 scope.go:117] "RemoveContainer" containerID="7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.260925 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483ea71b-33ae-40f2-9120-3b22c0f6ff21" path="/var/lib/kubelet/pods/483ea71b-33ae-40f2-9120-3b22c0f6ff21/volumes" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.300311 4813 scope.go:117] 
"RemoveContainer" containerID="437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf" Jan 29 17:32:22 crc kubenswrapper[4813]: E0129 17:32:22.300781 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf\": container with ID starting with 437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf not found: ID does not exist" containerID="437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.300821 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf"} err="failed to get container status \"437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf\": rpc error: code = NotFound desc = could not find container \"437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf\": container with ID starting with 437dec492e660b55a1ed455a1a6828452e4b2a45419c5d273a4167aae5e0dbcf not found: ID does not exist" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.300846 4813 scope.go:117] "RemoveContainer" containerID="3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa" Jan 29 17:32:22 crc kubenswrapper[4813]: E0129 17:32:22.301280 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa\": container with ID starting with 3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa not found: ID does not exist" containerID="3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.301349 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa"} err="failed to get container status \"3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa\": rpc error: code = NotFound desc = could not find container \"3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa\": container with ID starting with 3a8b86b56a4e300f6106febd7a42c2d5b7f7f2736409c2b247dafe86b69d42aa not found: ID does not exist" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.301375 4813 scope.go:117] "RemoveContainer" containerID="7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16" Jan 29 17:32:22 crc kubenswrapper[4813]: E0129 17:32:22.301688 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16\": container with ID starting with 7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16 not found: ID does not exist" containerID="7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.301713 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16"} err="failed to get container status \"7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16\": rpc error: code = NotFound desc = could not find container \"7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16\": container with ID starting with 
7bb8b17878865d32ad12003c9aa8625249b131aa7ee6b41caa4a971e8abf9b16 not found: ID does not exist" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.624846 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.802407 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv5nq\" (UniqueName: \"kubernetes.io/projected/11f96ea5-984e-41c6-ad14-0ce689ec882f-kube-api-access-rv5nq\") pod \"11f96ea5-984e-41c6-ad14-0ce689ec882f\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.802508 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-catalog-content\") pod \"11f96ea5-984e-41c6-ad14-0ce689ec882f\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.802602 4813 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-utilities\") pod \"11f96ea5-984e-41c6-ad14-0ce689ec882f\" (UID: \"11f96ea5-984e-41c6-ad14-0ce689ec882f\") " Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.803668 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-utilities" (OuterVolumeSpecName: "utilities") pod "11f96ea5-984e-41c6-ad14-0ce689ec882f" (UID: "11f96ea5-984e-41c6-ad14-0ce689ec882f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.819488 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f96ea5-984e-41c6-ad14-0ce689ec882f-kube-api-access-rv5nq" (OuterVolumeSpecName: "kube-api-access-rv5nq") pod "11f96ea5-984e-41c6-ad14-0ce689ec882f" (UID: "11f96ea5-984e-41c6-ad14-0ce689ec882f"). InnerVolumeSpecName "kube-api-access-rv5nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.835997 4813 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11f96ea5-984e-41c6-ad14-0ce689ec882f" (UID: "11f96ea5-984e-41c6-ad14-0ce689ec882f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.904512 4813 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.904583 4813 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv5nq\" (UniqueName: \"kubernetes.io/projected/11f96ea5-984e-41c6-ad14-0ce689ec882f-kube-api-access-rv5nq\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:22 crc kubenswrapper[4813]: I0129 17:32:22.904605 4813 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11f96ea5-984e-41c6-ad14-0ce689ec882f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.151865 4813 generic.go:334] "Generic (PLEG): container finished" podID="11f96ea5-984e-41c6-ad14-0ce689ec882f" containerID="a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575" exitCode=0 Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.151933 4813 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdxc" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.151952 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdxc" event={"ID":"11f96ea5-984e-41c6-ad14-0ce689ec882f","Type":"ContainerDied","Data":"a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575"} Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.152835 4813 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdxc" event={"ID":"11f96ea5-984e-41c6-ad14-0ce689ec882f","Type":"ContainerDied","Data":"502e0960a0e3232b6d9339e410f807f0a92d856ca14751dbed2f303995df0264"} Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.152864 4813 scope.go:117] "RemoveContainer" containerID="a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.178420 4813 scope.go:117] "RemoveContainer" containerID="d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.194510 4813 scope.go:117] "RemoveContainer" containerID="de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.201527 4813 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdxc"] Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.212275 4813 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdxc"] Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.225801 4813 scope.go:117] "RemoveContainer" containerID="a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575" Jan 29 17:32:23 crc kubenswrapper[4813]: E0129 17:32:23.226300 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575\": container with ID starting with a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575 not found: ID does not exist" containerID="a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.226470 4813 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575"} err="failed to get container status \"a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575\": rpc error: code = NotFound desc = could not find container \"a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575\": container with ID starting with a6e398a59359bfa76ac3d7366c8dff7362a4e306290e8facc1f8a4810671c575 not found: ID does not exist" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.226617 4813 scope.go:117] "RemoveContainer" containerID="d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201" Jan 29 17:32:23 crc kubenswrapper[4813]: E0129 17:32:23.227141 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201\": container with ID starting with d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201 not found: ID does not exist" containerID="d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.227201 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201"} err="failed to get container status \"d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201\": rpc error: code = NotFound desc = could not find container \"d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201\": container with ID starting with d2798e5d809528162053a1f60d785ed94896ab94cc6190e952db60e554c79201 not found: ID does not exist" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.227244 4813 scope.go:117] "RemoveContainer" containerID="de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500" Jan 29 17:32:23 crc kubenswrapper[4813]: E0129 17:32:23.227578 4813 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500\": container with ID starting with de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500 not found: ID does not exist" containerID="de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500" Jan 29 17:32:23 crc kubenswrapper[4813]: I0129 17:32:23.227610 4813 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500"} err="failed to get container status \"de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500\": rpc error: code = NotFound desc = could not find container \"de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500\": container with ID starting with de45dbf14db318309d345a5bda2228fbc913f9a68b0fbf1399694528858f7500 not found: ID does not exist" Jan 29 17:32:24 crc kubenswrapper[4813]: I0129 17:32:24.252609 4813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f96ea5-984e-41c6-ad14-0ce689ec882f" path="/var/lib/kubelet/pods/11f96ea5-984e-41c6-ad14-0ce689ec882f/volumes" Jan 29 17:32:32 crc kubenswrapper[4813]: I0129 17:32:32.240706 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:32:32 crc kubenswrapper[4813]: E0129 17:32:32.241454 4813 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:32:43 crc kubenswrapper[4813]: I0129 17:32:43.240763 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:32:43 crc kubenswrapper[4813]: E0129 17:32:43.242066 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:32:55 crc kubenswrapper[4813]: I0129 17:32:55.239737 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:32:55 crc kubenswrapper[4813]: E0129 17:32:55.241634 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:33:06 crc kubenswrapper[4813]: I0129 17:33:06.239744 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:33:06 crc kubenswrapper[4813]: E0129 17:33:06.240781 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" Jan 29 17:33:20 crc kubenswrapper[4813]: I0129 17:33:20.240202 4813 scope.go:117] "RemoveContainer" containerID="f2b992b56907a92ef60e1bc0e1100f24bf80344177c988663f90c08f87ba28bf" Jan 29 17:33:20 crc kubenswrapper[4813]: E0129 17:33:20.241101 4813 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-r269r_openshift-machine-config-operator(71cdf350-59d3-4d6f-8995-173528429b59)\"" pod="openshift-machine-config-operator/machine-config-daemon-r269r" podUID="71cdf350-59d3-4d6f-8995-173528429b59" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136714765024464 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136714766017402 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136704763016521 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136704764015472 5ustar corecore